US8218770B2 - Method and apparatus for secure key management and protection - Google Patents

Method and apparatus for secure key management and protection

Info

Publication number
US8218770B2
US8218770B2 (application number US11/539,327)
Authority
US
United States
Prior art keywords
data
memory
key
processor
decryption
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active, expires
Application number
US11/539,327
Other versions
US20070195957A1 (en)
Inventor
Ambalavanar Arulambalam
David E. Clune
Nevin C. Heintze
Michael James Hunter
Hakan I. Pekcan
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Avago Technologies International Sales Pte Ltd
Original Assignee
Agere Systems LLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US11/226,507 external-priority patent/US7599364B2/en
Priority claimed from US11/273,750 external-priority patent/US7461214B2/en
Priority claimed from US11/364,979 external-priority patent/US20070204076A1/en
Priority claimed from US11/384,975 external-priority patent/US7912060B1/en
Application filed by Agere Systems LLC filed Critical Agere Systems LLC
Priority to US11/539,327 priority Critical patent/US8218770B2/en
Publication of US20070195957A1 publication Critical patent/US20070195957A1/en
Assigned to AGERE SYSTEMS INC. reassignment AGERE SYSTEMS INC. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: HEINTZE, NEVIN C., ARULAMBALAM, AMBALAVANAR, CLUNE, DAVID E, HUNTER, MICHAEL JAMES, PEKCAN, HAKAN
Application granted granted Critical
Publication of US8218770B2 publication Critical patent/US8218770B2/en
Assigned to DEUTSCHE BANK AG NEW YORK BRANCH, AS COLLATERAL AGENT reassignment DEUTSCHE BANK AG NEW YORK BRANCH, AS COLLATERAL AGENT PATENT SECURITY AGREEMENT Assignors: AGERE SYSTEMS LLC, LSI CORPORATION
Assigned to AVAGO TECHNOLOGIES GENERAL IP (SINGAPORE) PTE. LTD. reassignment AVAGO TECHNOLOGIES GENERAL IP (SINGAPORE) PTE. LTD. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: AGERE SYSTEMS LLC
Assigned to LSI CORPORATION, AGERE SYSTEMS LLC reassignment LSI CORPORATION TERMINATION AND RELEASE OF SECURITY INTEREST IN PATENT RIGHTS (RELEASES RF 032856-0031) Assignors: DEUTSCHE BANK AG NEW YORK BRANCH, AS COLLATERAL AGENT
Assigned to BANK OF AMERICA, N.A., AS COLLATERAL AGENT reassignment BANK OF AMERICA, N.A., AS COLLATERAL AGENT PATENT SECURITY AGREEMENT Assignors: AVAGO TECHNOLOGIES GENERAL IP (SINGAPORE) PTE. LTD.
Assigned to AVAGO TECHNOLOGIES GENERAL IP (SINGAPORE) PTE. LTD. reassignment AVAGO TECHNOLOGIES GENERAL IP (SINGAPORE) PTE. LTD. TERMINATION AND RELEASE OF SECURITY INTEREST IN PATENTS Assignors: BANK OF AMERICA, N.A., AS COLLATERAL AGENT
Assigned to AVAGO TECHNOLOGIES INTERNATIONAL SALES PTE. LIMITED reassignment AVAGO TECHNOLOGIES INTERNATIONAL SALES PTE. LIMITED MERGER (SEE DOCUMENT FOR DETAILS). Assignors: AVAGO TECHNOLOGIES GENERAL IP (SINGAPORE) PTE. LTD.
Assigned to AVAGO TECHNOLOGIES INTERNATIONAL SALES PTE. LIMITED reassignment AVAGO TECHNOLOGIES INTERNATIONAL SALES PTE. LIMITED CORRECTIVE ASSIGNMENT TO CORRECT THE EFFECTIVE DATE OF MERGER TO 09/05/2018 PREVIOUSLY RECORDED AT REEL: 047230 FRAME: 0133. ASSIGNOR(S) HEREBY CONFIRMS THE MERGER. Assignors: AVAGO TECHNOLOGIES GENERAL IP (SINGAPORE) PTE. LTD.
Assigned to BROADCOM INTERNATIONAL PTE. LTD. reassignment BROADCOM INTERNATIONAL PTE. LTD. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: AVAGO TECHNOLOGIES INTERNATIONAL SALES PTE. LIMITED
Assigned to AVAGO TECHNOLOGIES INTERNATIONAL SALES PTE. LIMITED reassignment AVAGO TECHNOLOGIES INTERNATIONAL SALES PTE. LIMITED MERGER (SEE DOCUMENT FOR DETAILS). Assignors: AVAGO TECHNOLOGIES INTERNATIONAL SALES PTE. LIMITED, BROADCOM INTERNATIONAL PTE. LTD.
Active legal-status Critical Current
Adjusted expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 21/00 Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F 21/70 Protecting specific internal or peripheral components, in which the protection of a component leads to protection of the entire computer
    • G06F 21/71 Protecting specific internal or peripheral components, in which the protection of a component leads to protection of the entire computer to assure secure computing or processing of information
    • G06F 21/72 Protecting specific internal or peripheral components, in which the protection of a component leads to protection of the entire computer to assure secure computing or processing of information in cryptographic circuits
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 9/00 Cryptographic mechanisms or cryptographic arrangements for secret or secure communications; Network security protocols
    • H04L 9/08 Key distribution or management, e.g. generation, sharing or updating, of cryptographic keys or passwords
    • H04L 9/0894 Escrow, recovery or storing of secret information, e.g. secret key escrow or cryptographic key storage
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 2209/00 Additional information or applications relating to cryptographic mechanisms or cryptographic arrangements for secret or secure communication H04L 9/00
    • H04L 2209/60 Digital content management, e.g. content distribution
    • H04L 2209/603 Digital rights management [DRM]

Definitions

  • the present invention relates to security mechanisms for network attached media streaming systems.
  • Keys used for Encryption/Decryption are derived from various intermediate keys to ultimately determine a title key for a media file.
  • a master key unlocks a device key; the device key, in turn, unlocks a media key; and the media key yields the title key. Throughout this process, it is important that the decrypted keys are never exposed to users or processes outside the device, where an attacker could exploit them.
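The key ladder described above can be sketched as follows. This is a toy model: the `unwrap` primitive is an XOR-keystream stand-in for the device's real decryption engine, and the key names and sizes are illustrative assumptions, not values from the patent.

```python
import hashlib

def unwrap(wrapping_key: bytes, wrapped: bytes) -> bytes:
    # Toy stand-in for hardware decryption: XOR with a hash-derived keystream.
    stream = hashlib.sha256(wrapping_key).digest()
    return bytes(b ^ s for b, s in zip(wrapped, stream))

def wrap(wrapping_key: bytes, plain: bytes) -> bytes:
    return unwrap(wrapping_key, plain)   # XOR is its own inverse

# Key ladder: master -> device -> media -> title (illustrative values)
master_key = b"\x01" * 16
device_key = b"\x02" * 16
media_key  = b"\x03" * 16
title_key  = b"\x04" * 16

# Keys as stored, each wrapped under the previous level's key
wrapped_device = wrap(master_key, device_key)
wrapped_media  = wrap(device_key, media_key)
wrapped_title  = wrap(media_key, title_key)

# Unlock chain; intermediate plaintext keys never leave this scope
k = unwrap(master_key, wrapped_device)   # device key
k = unwrap(k, wrapped_media)             # media key
k = unwrap(k, wrapped_title)             # title key
assert k == title_key
```

The point of the ladder is that only wrapped keys are ever stored or transported; each plaintext key exists only transiently, inside the device, while unwrapping the next level.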
  • Embodiments of the present invention provide a server for transferring data packets of streaming data sessions between playback devices.
  • the server includes a protocol accelerator that, for received data packets, i) extracts header fields of the packets, ii) determines, based on the header fields, a destination for the packets, and iii) provides the packets to the destination.
  • the protocol accelerator i) groups the data into packets, ii) generates header fields for the packets, and iii) provides the packets to the network.
  • a control processor processes data.
  • a memory arbiter manages accesses to shared memory that buffers data and stores keys corresponding to the data sessions.
  • a storage medium stores media files corresponding to the data sessions.
  • a key manager includes i) a first memory for storing a master key of the server, ii) a second memory for storing one or more keys corresponding to the data sessions, and iii) a processor to encrypt and decrypt data.
  • FIG. 1 is a block diagram of an exemplary network attached server (NAS) system for streaming media in accordance with embodiments of the present invention;
  • FIG. 2 is a block diagram of an exemplary ULP accelerator of the system shown in FIG. 1 ;
  • FIG. 3 is a block diagram of an exemplary TMA module of the system shown in FIG. 1 ;
  • FIG. 4 is a block diagram of a secure key management system in accordance with embodiments of the present invention.
  • FIG. 5 is a block diagram of an exemplary home network attached storage (NAS) server including the secure key management system of FIG. 4 ;
  • FIG. 6 is a data flow diagram showing exemplary data flows during a key decryption and data decryption operation in accordance with embodiments of the present invention.
  • FIG. 7 is a flow chart showing a method of decrypting data in accordance with exemplary embodiments of the present invention.
  • FIG. 1 is a block diagram of an exemplary home media server and network attached storage (NAS) system 10 , which might be implemented as a system on a chip (SOC).
  • NAS system 10 is connected to input sources, such as via USB port 130 or network port 131 , and one or more mass storage devices, such as a hard disk drive (HDD) array 141 .
  • data from multiple sessions are concurrently stored to disk array 141 , or played out to devices (e.g., PCs, TVs, digital video recorders (DVRs), personal video recorders (PVRs), and the like, not shown) on a home network via USB port 130 or network port 131 .
  • USB port 130 and network port 131 might also be used for control traffic.
  • a receive session is a connection in which data is being received from a media device, reassembled and stored in disk array 141 (or other mass storage device), and a transmit session is a connection in which data is being read out from disk array 141 to a media device (e.g., TV, stereo, computer or the like) for playback.
  • a control session is a connection in which data is transferred between a network and application processor (AP) 150 for processor functions that operate NAS system 10 (e.g., retrieving data or instructions from shared memory 110 , reading from or writing to registers).
  • AP 150 might be an embedded ARM926EJ-S core by ARM Holdings, plc, Cambridge, UK, or any other embedded microprocessor.
  • AP 150 is coupled to other elements of the system by at least one of two different buses: instruction bus 174 and data bus 172 .
  • instruction and data buses 174 and 172 are AMBA AHB buses.
  • AP 150 is coupled to Traffic Manager Arbitrator (TMA) 100 and flash memory 152 via instruction bus 174 and data bus 172 .
  • TMA 100 includes an exemplary memory controller interface 160 .
  • TMA 100 manages i) storage of media streams arriving via network port 131 , ii) handling of control traffic for application processing, and iii) playback traffic during retrieval from HDD array 141 .
  • TMA 100 controls the flow of all traffic among the network controller 165 , USB controller 164 , AP 150 , HDD array 141 , and shared memory 110 .
  • shared memory 110 is implemented by a single-port DDR-2 DRAM.
  • Double Data Rate (DDR) synchronous dynamic random access memory (SDRAM) is a high-bandwidth DRAM technology.
  • Other types of memory might be used to implement shared memory 110 .
  • disk array 141 is implemented as a 4-channel Serial Advanced Technology Attachment (SATA) hard disk array, although other types of storage devices, such as Parallel Advanced Technology Attachment (PATA) hard disks, optical disks, or the like might be employed.
  • AP 150 is also coupled, via a data bus 172 , to Gigabit Ethernet media access control (GbE MAC) network controller 165 , Upper Layer Protocol (ULP) accelerator 120 , RAID decoder/encoder (RDE) module 140 (where RAID denotes redundant array of inexpensive disks), USB controller 164 and multi drive controller (MDC) 142 .
  • AP 150 accesses shared memory 110 for several reasons. Part of shared memory 110 might generally contain program instructions and data for AP 150 .
  • AHB Instruction Bus 174 might access shared memory 110 to get instruction/program data on behalf of AP 150 . Also, the control traffic destined for AP 150 inspection is stored in shared memory 110 .
  • AHB instruction bus 174 has read access to shared memory 110 , but the AHB data bus 172 is provided both read and write access to memory 110 .
  • AP 150 uses the write access to AHB data bus 172 to re-order data packets (e.g., TCP packets) received out-of-order. Also, AP 150 might insert data in and extract data from an existing packet stream in the shared memory 110 .
  • AHB data bus 172 and AHB instruction bus 174 frequently access shared memory 110 on behalf of AP 150 .
  • AHB data bus 172 is primarily used to access the internal register space and to access the data portion of the external shared memory.
  • AHB instruction bus 174 is used to access instructions specific to AP 150 , that are stored in shared memory 110 .
  • NAS system 10 receives media objects and control traffic from network port 131 and the objects/traffic are first processed by the local area network controller (e.g., Gigabit Ethernet controller GbE MAC 165 ) and ULP accelerator 120 .
  • ULP accelerator 120 transfers the media objects and control traffic to TMA 100 , and TMA 100 stores the arriving traffic in shared memory 110 .
  • the incoming object data are temporarily stored in shared memory 110 , and then transferred to RDE 140 for storage in disk array 141 .
  • TMA 100 also manages the retrieval requests from disk array 141 toward network port 131 . While servicing media playback requests, data is transferred from disk array 141 and stored in buffers in shared memory 110 . The data in the buffers is then transferred out to network controller 165 via ULP accelerator 120 . The data are formed into packets for transmission using TCP/IP, with ULP accelerator 120 performing routine TCP protocol tasks to reduce the load on AP 150 .
  • ULP accelerator 120 might generally offload routine TCP/IP protocol processing from AP 150 .
  • ULP accelerator 120 might perform routine, high frequency calculations and decisions in hardware in real-time, while transferring infrequent, complex calculations and decisions to AP 150 .
  • ULP accelerator 120 might handle communication processing for most packets.
  • ULP accelerator 120 might extract one or more header fields of a received packet and perform a lookup to determine a destination for the received packet.
  • ULP accelerator 120 might also tag a received packet from a previously-established connection with a pre-defined Queue Identifier (QID) used by TMA 100 for traffic queuing.
  • ULP accelerator 120 might route packets received from new or unknown connections to AP 150 for further processing.
  • ULP accelerator 120 provides a received packet to either i) disk array 141 via RDE 140 if the packet contains media content from a previously-established connection, or ii) AP 150 for further processing if the packet contains a control message or the packet is not recognized by ULP accelerator 120 .
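The receive-path routing decision above can be modeled as a small dispatch function; the header-field names and table layout are assumptions for illustration, not the patent's actual data structures.

```python
def route_packet(headers: dict, connection_table: dict):
    """Decide a received packet's destination, per the ULP lookup flow:
    unknown connections go to the AP, media streams toward the disk array."""
    key = (headers["src_ip"], headers["dst_ip"],
           headers["src_port"], headers["dst_port"])
    entry = connection_table.get(key)
    if entry is None:
        return ("AP", None)             # new/unknown connection: AP 150 inspects it
    if entry["type"] == "media":
        return ("DISK", entry["qid"])   # previously-established media stream: RDE/disk
    return ("AP", entry["qid"])         # control message on a known connection

# Example: one established media connection, tagged with QID 7
table = {("10.0.0.2", "10.0.0.1", 5000, 80): {"type": "media", "qid": 7}}
assert route_packet({"src_ip": "10.0.0.2", "dst_ip": "10.0.0.1",
                     "src_port": 5000, "dst_port": 80}, table) == ("DISK", 7)
```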
  • TMA 100 might temporarily buffer received packets in shared memory 110 .
  • ULP accelerator 120 receives a data transfer request from TMA 100 .
  • the source of data might be disk array 141 (for a media stream), AP 150 (for a control message), or ULP accelerator 120 itself (for a TCP acknowledgement packet).
  • ULP accelerator 120 might encapsulate an Ethernet header (e.g., a TCP header and an IP header) for each outgoing packet and then provide each packet to network interface 165 or USB controller 164 .
  • FIG. 2 shows greater detail of ULP accelerator 120 in NAS system 10 .
  • NAS system 10 includes two separate data paths: a receive data path and a transmit data path.
  • the receive path carries traffic from external devices, for example, via network controller 165 or USB controller 164 , to TMA 100 .
  • the transmit path carries traffic from disk array 141 to external devices, for example, via network controller 165 or USB controller 164 .
  • ULP accelerator 120 receives packets, for example, Ethernet packets from network controller 165 or USB packets from USB controller 164 .
  • the L3 and L4 header fields of each packet are extracted by ULP accelerator 120 .
  • ULP accelerator 120 performs a connection lookup and decides where to send the received packet.
  • An arriving packet from a previously-established connection is tagged with a pre-defined Queue ID (QID) used by TMA 100 for traffic queuing purposes.
  • a packet from a new or unknown connection might require inspection by AP 150 .
  • ULP accelerator 120 might tag the packet with a special QID and route the packet to AP 150 .
  • the final destination of an arriving packet after ULP accelerator 120 is either disk array 141 for storage (if the packet carries media content), or AP 150 for further processing (if the packet carries a control message or is not recognized by ULP accelerator 120 ).
  • TMA 100 sends the packet to shared memory 110 for temporary buffering.
  • media data might be transferred between a client (not shown) and NAS system 10 in a bulk data transfer that is handled by hardware without processing by AP 150 .
  • a bulk data transfer might be performed such as described in related U.S.
  • ULP accelerator 120 receives a data transfer request from TMA 100 .
  • the source of data to be transferred might be disk array 141 (for a media stream), or ULP accelerator 120 itself (for control data, such as a TCP acknowledgement packet).
  • ULP accelerator 120 encapsulates an Ethernet header, an L3 (IP) header and an L4 (TCP) header for each outgoing packet and then sends the packet to one or more external devices, for example, via network controller 165 or USB controller 164 , based on the destination port specified.
  • Packets for transmission might come from three sources: 1) AP 150 can insert packets for transmission when necessary; 2) TMA 100 can stream data from disk array 141 ; and 3) ULP accelerator 120 can insert an acknowledge (ACK) packet when a timer expires.
  • data is forwarded to ULP accelerator 120 from TMA 100 .
  • SAT 250 generates the data transfer request to ULP accelerator 120 .
  • ULP accelerator 120 processes received network packets in Header Parsing Unit (HPU) 220 , which parses incoming data packets (PDUs), as indicated by signal PARSE_PDU, to determine where the L3 and L4 packet headers start, and delineates the packet boundary between different protocol levels by parsing the packet content.
  • Checksum block 225 performs an L3 and L4 checksum on the incoming data packets to check packet integrity, as indicated by signal CALCULATE_CHECKSUM.
  • Receive Buffer (RX_Buf) 230 buffers incoming packets for use by ULP accelerator 120 , as indicated by signal BUFFER_PDU.
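The header delineation done by HPU 220 and the integrity check done by checksum block 225 can be sketched in software. The framing assumptions here (plain Ethernet II, IPv4, TCP, no VLAN tags) are illustrative simplifications, not the hardware's full parsing logic.

```python
import struct

def parse_l3_l4(frame: bytes):
    """Locate the L3 (IP) and L4 (TCP) header offsets in an Ethernet frame,
    as HPU 220 does when delineating protocol boundaries."""
    l3_off = 14                                   # Ethernet II header is 14 bytes
    ihl = (frame[l3_off] & 0x0F) * 4              # IPv4 IHL field -> header length
    l4_off = l3_off + ihl
    data_off = l4_off + ((frame[l4_off + 12] >> 4) * 4)  # TCP data-offset field
    return l3_off, l4_off, data_off

def ip_checksum(header: bytes) -> int:
    """RFC 1071 ones'-complement sum, the check performed on the L3 header."""
    if len(header) % 2:
        header += b"\x00"
    total = sum(struct.unpack("!%dH" % (len(header) // 2), header))
    while total >> 16:
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

# Minimal frame: 14-byte Ethernet + 20-byte IPv4 (IHL=5) + 20-byte TCP (offset=5)
frame = b"\x00" * 14 + b"\x45" + b"\x00" * 19 + b"\x00" * 12 + b"\x50" + b"\x00" * 7
assert parse_l3_l4(frame) == (14, 34, 54)
```

A header whose checksum field holds the value returned by `ip_checksum` (computed with that field zeroed) sums back to zero, which is how the integrity check detects corruption.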
  • TMA 100 is coupled to ULP accelerator 120 , to provide ULP accelerator 120 with an interface to, for example, shared memory 110 , as indicated by signals PDU_ENQUEUE, for placing data packets in a corresponding queue buffer, UPDATE_BP for updating one or more corresponding pointers of the queue buffer, such as a read or write pointer, and PDU_DEQUEUE, for removing data packets from a corresponding queue buffer.
  • Connection look-up unit (CLU) 240 is provided with received network data and extracts L3 and L4 fields to form a lookup address, as indicated by signal CONNECTION LOOKUP, and maintains parameters that uniquely identify an established connection, for example a Connection ID (CID) in a connection table for use by AP 150 in locating buffer space in shared memory 110 corresponding to each connection.
  • CLU 240 might use the L3 and L4 fields to form a look-up address for content addressable memory (CAM) 241 .
  • CAM 241 stores parameters that uniquely identify an established connection. An index of matched CAM entries provides a CID for look-up in the connection table.
  • the queue ID (QID) used by TMA 100 to identify a queue buffer might generally be one of the connection parameters maintained by CLU 240 .
  • CAM 241 allows real-time extraction of the QID within the hardware of ULP accelerator 120 , as indicated by signal GET_QID. If an incoming packet does not match an entry in CAM 241 , ULP accelerator 120 provides the packet to AP 150 for further processing.
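A software model of CAM 241 and the connection table might look like the following; the use of a list index as the CID mirrors the "index of matched CAM entries" described above, while the field names are illustrative assumptions.

```python
class ConnectionCAM:
    """Model of CAM 241: match a packet's L3/L4 fields against stored
    connection entries; the index of the matching entry is the CID."""
    def __init__(self):
        self.entries = []                 # list position plays the CAM index

    def add(self, src_ip, dst_ip, src_port, dst_port) -> int:
        self.entries.append((src_ip, dst_ip, src_port, dst_port))
        return len(self.entries) - 1      # CID assigned to this connection

    def lookup(self, src_ip, dst_ip, src_port, dst_port):
        key = (src_ip, dst_ip, src_port, dst_port)
        return self.entries.index(key) if key in self.entries else None

cam = ConnectionCAM()
cid = cam.add("10.0.0.2", "10.0.0.1", 5000, 80)
connection_table = {cid: {"qid": 3}}      # CID indexes per-connection state
assert connection_table[cam.lookup("10.0.0.2", "10.0.0.1", 5000, 80)]["qid"] == 3
```

In hardware the match is done in parallel across all entries in one cycle; the linear search here is only a functional stand-in.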
  • Payload collection unit (PCU) 260 collects traffic from TMA 100 for transmission.
  • Header encapsulation unit (HEU) 280 includes an encapsulation table of template L2, L3 and L4 headers to be added to each outgoing packet.
  • Header Construction Unit (HCU) 270 builds the packet header according to the encapsulation table of HEU 280 .
  • Packet Integration Unit (PIU) 290 assembles a packet by combining packet header data and payload data to form outgoing packets.
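The HEU/HCU/PIU transmit pipeline above reduces to: fetch per-connection header templates, build the header, and prepend it to the payload. The template bytes below are placeholders, not real L2/L3/L4 headers.

```python
# Template headers kept per connection, as in HEU 280's encapsulation table
templates = {3: {"l2": b"ETH", "l3": b"IP_", "l4": b"TCP"}}  # placeholder bytes

def build_packet(cid: int, payload: bytes) -> bytes:
    """HCU-style header construction followed by PIU-style integration."""
    t = templates[cid]
    header = t["l2"] + t["l3"] + t["l4"]   # HCU: assemble header from templates
    return header + payload                # PIU: header + payload = outgoing packet

assert build_packet(3, b"DATA") == b"ETHIP_TCPDATA"
```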
  • AP 150 controls the setup of ULP accelerator 120 .
  • Sequence and Acknowledgement Table (SAT) 250 maintains a SAT table to track incoming packet sequence numbers and acknowledgement packets for received and transmitted data packets.
  • the SAT table might be used for TCP/IP connections, or other connection oriented protocols.
  • SAT 250 performs transport layer processing, for example, protocol specific counters for each connection and the remaining object length to be received for each CID.
  • SAT 250 might also offload most TCP operations from AP 150 , for example, updating sequence numbers, setting timers, detecting out-of-sequence packets, recording acknowledgements, etc., as indicated by signals TCP_DATA, LOAD_TCP and ACK_INSERT.
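The sequence/acknowledgement tracking that SAT 250 offloads from AP 150 can be sketched as a per-CID table; the method names and return conventions are illustrative assumptions.

```python
class SeqAckTable:
    """Sketch of SAT 250: per-connection expected TCP sequence numbers,
    generating ACK numbers and flagging out-of-sequence segments."""
    def __init__(self):
        self.next_seq = {}                  # CID -> next expected sequence number

    def open(self, cid: int, initial_seq: int):
        self.next_seq[cid] = initial_seq

    def on_segment(self, cid: int, seq: int, length: int):
        """In-order data advances the window and yields an ACK number;
        anything else is flagged so AP 150 can handle reordering."""
        expected = self.next_seq[cid]
        if seq != expected:
            return ("out_of_order", expected)
        self.next_seq[cid] = seq + length
        return ("ack", seq + length)

sat = SeqAckTable()
sat.open(cid=1, initial_seq=1000)
assert sat.on_segment(1, 1000, 500) == ("ack", 1500)
assert sat.on_segment(1, 2000, 500) == ("out_of_order", 1500)
```

This split matches the design goal stated earlier: the frequent, simple bookkeeping stays in hardware, while the rare, complex case (reordering) escalates to the processor.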
  • ULP accelerator 120 might be implemented such as described in related U.S.
  • TMA 100 manages i) storage of media streams arriving via network port 131 , ii) handling of control traffic for application processing, and iii) playback traffic during retrieval from disk array 141 .
  • TMA 100 controls the flow of all traffic among network controller 165 , USB controller 164 , shared memory 110 , AP 150 , and disk array 141 .
  • TMA 100 manages data storage to and retrieval from disk array 141 by providing the appropriate control information to RDE 140 .
  • Control traffic destined for inspection by AP 150 is also stored in shared memory 110 , and AP 150 can read packets from shared memory 110 .
  • AP 150 also re-orders any packets received out of order.
  • a portion of shared memory 110 and disk array 141 might be employed to store program instructions and data for AP 150 .
  • TMA 100 manages access to shared memory 110 and disk array 141 by transferring control information from the disk to memory and from memory to disk.
  • TMA 100 also enables AP 150 to insert data and extract data to and from an existing packet stream stored in shared memory 110 .
  • TMA 100 is shown in greater detail in FIG. 3 .
  • TMA 100 interfaces to at least five modules/devices: 1) shared memory 110 ; 2) ULP accelerator 120 , which might also interface to a network controller (e.g., 165 ); 3) USB controller 164 ; 4) one or more non-volatile storage devices, for example, disk array 141 ; and 5) AP 150 .
  • Memory controller interface 160 provides the interface for managing accesses to shared memory 110 via a single memory port, such as described in related U.S. patent application Ser. No. 11/273,750, filed Nov. 15, 2005.
  • As shown in FIG. 3 , TMA 100 includes memory controller interface 160 , buffer managers 370 , 372 , 374 and 376 that handle memory buffer and disk management, and schedulers 378 , 380 and 382 that allocate the available memory access bandwidth of shared memory 110 .
  • Reassembly buffer/disk manager (RBM) 372 manages the transfer of control packets or packetized media objects from network port 131 to shared memory 110 for reassembly, and then, if appropriate, the transfer of the control packets or packetized media objects to disk array 141 .
  • Media playback buffer/disk manager (PBM) 374 manages the transfer of data out of disk array 141 to shared memory 110 , and then the transfer of data from shared memory 110 to ULP accelerator 120 or USB controller 164 during playback.
  • Application processor memory manager (AMM) 376 provides an interface for AP 150 to disk array 141 and shared memory 110 .
  • Free buffer pool manager (FBM) 370 allocates and de-allocates buffers when needed by the RBM 372 , PBM 374 or AMM 376 , and maintains a free buffer list, where the free buffer list might be stored in a last-in, first-out (LIFO) queue.
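The free-buffer bookkeeping that FBM 370 performs can be sketched as a LIFO free list, as described above; buffer IDs and pool size are illustrative.

```python
class FreeBufferPool:
    """Sketch of FBM 370: allocate and de-allocate fixed-size buffers,
    keeping the free list in a last-in, first-out (LIFO) queue."""
    def __init__(self, num_buffers: int):
        self.free = list(range(num_buffers))   # list tail acts as the LIFO top

    def alloc(self) -> int:
        if not self.free:
            raise MemoryError("no free buffers")
        return self.free.pop()                 # most recently freed buffer first

    def release(self, buf_id: int):
        self.free.append(buf_id)

pool = FreeBufferPool(4)
a = pool.alloc()
pool.release(a)
assert pool.alloc() == a    # LIFO: the last buffer released is reallocated first
```

A LIFO free list tends to re-issue recently touched buffers, which helps locality in the shared DRAM; a FIFO would spread accesses across the whole pool.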
  • Memory access scheduler (MAS) 378 , media playback scheduler (MPS) 380 , and disk access scheduler (DAS) 382 manage the shared resources, such as memory access bandwidth and disk access bandwidth.
  • Schedulers 378 , 380 and 382 also provide a prescribed quality of service (QoS), in the form of allocated bandwidth and latency guarantees for media objects during playback.
  • MAS 378 provides RBM 372 , PBM 374 and AMM 376 guaranteed memory access bandwidth.
  • MPS 380 arbitrates among multiple media transfer requests and provides allocated bandwidth and ensures continuous playback without any interruption.
  • DAS 382 provides guaranteed accesses to the disk for the re-assembly process, the playback process, and accesses by AP 150 .
  • MAS 378 manages bandwidth distribution among each media session, while memory controller interface 160 manages all memory accesses via a single memory port of shared memory 110 .
  • MAS 378 and memory controller interface 160 of TMA 100 work together to make efficient and effective use of the memory access resources.
  • MAS 378 might generally provide a prescribed QoS (by pre-allocated time slots and round-robin polling) to a plurality of data transfer requests having different request types.
  • Each of the various types of media streams involves a respectively different set of data transfers to and from shared memory 110 that are under control of MAS 378 .
  • memory write operations include i) re-assembly media write, ii) playback media write, iii) application processor data transfer from disk array 141 to shared memory 110 , and iv) application processor write memory operations.
  • Memory read operations include i) re-assembly read, ii) playback media read, iii) application processor data transfer from shared memory 110 to disk array 141 , and iv) application processor read memory operations.
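The slot-based round-robin arbitration that MAS 378 applies to these read and write request types can be sketched as follows; the slot plan and requester names are illustrative assumptions, not the patent's actual schedule.

```python
from itertools import cycle

class MemoryAccessScheduler:
    """Sketch of MAS 378: pre-allocated time slots per requester, polled
    round-robin, so each buffer manager gets guaranteed memory bandwidth."""
    def __init__(self, slot_plan):
        self.slots = cycle(slot_plan)          # e.g. PBM gets two slots per cycle
        self.queues = {name: [] for name in set(slot_plan)}

    def request(self, requester: str, op: str):
        self.queues[requester].append(op)

    def next_grant(self):
        """Grant the next slot owner's oldest request; skip empty slots."""
        for _ in range(len(self.queues) * 4):
            owner = next(self.slots)
            if self.queues[owner]:
                return owner, self.queues[owner].pop(0)
        return None

# Playback (PBM) gets twice the slots of re-assembly (RBM) and the AP (AMM)
mas = MemoryAccessScheduler(["RBM", "PBM", "AMM", "PBM"])
mas.request("PBM", "playback-read")
mas.request("RBM", "reassembly-write")
assert mas.next_grant() == ("RBM", "reassembly-write")
assert mas.next_grant() == ("PBM", "playback-read")
```

Because slots are fixed in advance, a burst of requests from one manager cannot starve the others, which is the QoS guarantee described above.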
  • the re-assembly media write process might typically include four steps: 1) receiving data from network port 131 or USB port 130 ; 2) writing the data to shared memory 110 ; 3) reading the data from shared memory 110 ; and 4) writing the data to disk array 141 .
  • the playback media read process might typically include four steps: 1) accessing and receiving data from disk array 141 ; 2) writing the data to shared memory 110 ; 3) reading the data from shared memory 110 ; and 4) sending the data to network port 131 or USB port 130 .
  • the application processor data transfer from memory 110 to disk array 141 might typically include two steps: 1) reading the data from shared memory 110 ; and 2) writing the data to disk array 141 .
  • the application processor data transfer from disk array 141 to shared memory 110 might typically include two steps: 1) reading the data from disk array 141 ; and 2) writing the data to shared memory 110 .
  • AP 150 might write to or read from shared memory 110 directly without writing to or reading from disk array 141 .
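The four-step re-assembly write path enumerated above can be traced in a minimal sketch; the callable and list interfaces below stand in for the network port, shared memory 110, and disk array 141.

```python
def reassembly_write(network_rx, shared_memory: list, disk: list):
    """The four-step re-assembly media write path described above."""
    data = network_rx()             # 1) receive data from network/USB port
    shared_memory.append(data)      # 2) write the data to shared memory 110
    buffered = shared_memory.pop()  # 3) read the data back from shared memory
    disk.append(buffered)           # 4) write the data to disk array 141

mem, disk = [], []
reassembly_write(lambda: b"frame", mem, disk)
assert disk == [b"frame"] and mem == []
```

The playback read path is the mirror image (disk to memory to network), which is why every media session costs two memory passes, one write and one read, of the bandwidth MAS 378 schedules.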
  • NAS system 10 receives media objects and control traffic from network port 131 and the objects/traffic are first processed by network controller 165 and ULP accelerator 120 .
  • ULP accelerator 120 transfers the media objects and control traffic to TMA 100 , and TMA 100 stores the arriving traffic in shared memory 110 .
  • the incoming object data is temporarily stored in shared memory 110 , and then transferred to RDE 140 for storage in disk array 141 .
  • TMA 100 also manages retrieval requests from disk array 141 toward network port 131 . While servicing media playback requests, data is transferred from disk array 141 and buffered in shared memory 110 . The data is then transferred out to network port 131 via ULP accelerator 120 , which forms the data into packets for transmission using TCP/IP.
  • TMA 100 manages the storage to and retrieval from disk array 141 by providing the appropriate control information to RDE 140 .
  • TMA 100 might be implemented such as described in related U.S. patent application Ser. No. 11/273,750, filed Nov. 15, 2005.
  • Embodiments of the present invention might provide a localized key protection mechanism employing a hardware-based key management engine, and a subsystem for accelerated encryption/decryption of media content.
  • FIG. 4 shows an example of a system in which the keys are managed primarily in hardware, thus prohibiting any outside entity from gaining access to these keys.
  • the exemplary secure key manager 400 includes key memory 410 , key processing engine 404 , and encryption/decryption engine 402 .
  • Key processing engine 404 might be implemented as a direct memory access (DMA) engine such as, for example an ARM PrimeCell PL080 by ARM Holdings, plc of Cambridge, UK, although other implementations might be employed.
  • Encryption/Decryption Engine 402 might be implemented as an Advanced Encryption Standard (AES) core, such as a CS5210-40 core by Conexant Systems, Inc., Newport Beach, Calif., although other encryption/decryption engines and other encryption/decryption algorithms might be employed.
  • key manager 400 might be coupled to an Advanced Microcontroller Bus Architecture (AMBA) Advanced High-performance Bus (AHB), but any suitable type of data bus might be employed.
  • Via the AHB bus, key manager 400 might be in communication with other components of NAS system 10 shown in FIG. 1 , such as AP 150 , memory controller interface 160 , RDE 140 and TMA 100 .
  • FIG. 5 shows an exemplary media server key manager 500 , which might be used for a home media server application.
  • decryption/encryption engine 402 might be implemented as AES core 502 , which operates in accordance with the Advanced Encryption Standard (AES).
  • key processing engine 404 might be implemented as a direct memory access (DMA) processor, shown as DMA processor 504 .
  • key processing engine 404 might be any module that moves data efficiently between non-volatile memory 512 and AES Core 502 and key memory 510 without making the data available to AP 150 , such as a function built into TMA 100 .
  • intermediate storage is provided in memory 110 for storing incoming streaming data from network port 131 or while streaming out data from disk array 141 to network port 131 .
  • Control traffic arriving from network port 131 is also managed in memory 110 .
  • Shared memory 110 might include one or more buffer queues (shown as 661 in FIG. 6 ) to manage simultaneous data streams.
  • NAS system 10 might simultaneously receive data from multiple sessions to be i) stored to disk array 141 , ii) played out to devices on a home network (e.g., via network port 131 ), or iii) used for control traffic.
  • Buffer queues 661 are employed to manage the various traffic flows.
  • TMA 100 is employed to manage the traffic and bandwidth of shared memory 110 .
  • Data memory 508 provides intermediate storage, for example, for queuing or buffering encrypted payload data to be decrypted or the decrypted payload data.
  • Non-volatile key memory 512 might be used to store a set of one or more master keys. In some embodiments, to enhance security, non-volatile key memory 512 can only be written once (e.g., key memory 512 is a one-time programmable (OTP) memory). The master keys stored in non-volatile key memory 512 are used to decrypt keys that are stored in external memory (e.g., flash memory 152 ) by the media server manufacturer. The master keys are also programmed to non-volatile key memory 512 during the device manufacturing process.
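The write-once property described above can be illustrated with a minimal sketch. The class and method names below are hypothetical, not part of the patent; the sketch only models the stated behavior that the OTP key memory is programmed once (e.g., at manufacturing time) and rejects any later write.

```python
class OTPKeyMemory:
    """Sketch of a one-time programmable (OTP) master-key store.

    Models the described property of non-volatile key memory 512:
    it can be written exactly once; subsequent writes are refused.
    """

    def __init__(self, slots):
        self._keys = [None] * slots
        self._programmed = False

    def program(self, keys):
        # Only the very first program() call succeeds.
        if self._programmed:
            raise PermissionError("OTP memory already programmed")
        self._keys = list(keys)
        self._programmed = True

    def read(self, slot):
        return self._keys[slot]


otp = OTPKeyMemory(slots=2)
otp.program([b"master-key-0", b"master-key-1"])  # manufacturing step
try:
    otp.program([b"attacker-key", b"attacker-key"])
except PermissionError:
    pass  # the second write is rejected
assert otp.read(0) == b"master-key-0"
```

In a real device the "already programmed" state is a physical property of the memory cells, not a software flag; the flag here only stands in for that behavior.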
  • read access to the master keys in non-volatile key memory 512 is limited to DMA Key Processing Engine 504 (to the exclusion of AP 150 ).
  • arbiter 507 might grant access of AHB Bus 520 to either AP 150 or DMA Key Processing Engine 504 at any given time, so that AP 150 cannot access AHB Bus 520 while DMA Processor 504 is reading decrypted keys from one of volatile key memory 510 or the output FIFO 663 ( FIG. 6 ) of AES Core 502 .
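The mutual exclusion enforced by arbiter 507 can be sketched as follows. This is an illustrative model with invented names, not the actual hardware arbitration logic: at any instant only one master owns the bus, and accesses by the other master are refused.

```python
class BusArbiter:
    """Sketch of mutually exclusive bus grants (arbiter 507 model).

    While the DMA engine holds the grant to read decrypted keys,
    the application processor cannot issue bus transactions.
    """

    def __init__(self):
        self.owner = "AP"  # AP owns the bus by default

    def grant(self, master):
        self.owner = master

    def access(self, master, resource):
        if master != self.owner:
            raise PermissionError(f"{master} does not own the bus")
        return f"{master} read {resource}"


arb = BusArbiter()
arb.grant("DMA")  # arbiter hands the bus to the DMA engine
assert arb.access("DMA", "key_memory_510") == "DMA read key_memory_510"
try:
    arb.access("AP", "key_memory_510")  # AP is locked out meanwhile
except PermissionError:
    pass
```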
  • Due to the cost associated with the memories employed by non-volatile key memory 512 and key memory 510, the amount of on-chip memory space might be limited. By storing encrypted keys in an optional external memory (e.g., flash memory 152), the total number of device-specific keys that can be stored is extended. The device-specific keys are encrypted, and the key (to decrypt the keys stored in flash memory 152) is programmed in non-volatile key memory 512.
  • AP 150 requests that DMA Processor 504 move a key from either non-volatile key memory 512 or key memory 510 to AES core 502 . Once the key transfer is done, AP 150 inputs the data that are to be decrypted to AES core 502 . Arbiter 507 then grants DMA Processor 504 access to AHB Bus 520 , to the exclusion of AP 150 . AES core 502 decrypts the key data, and the decrypted key is moved by DMA Processor 504 to volatile key memory 510 . Arbiter 507 prevents access by AP 150 to the decrypted key stored in key memory 510 .
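The key-unwrap sequence above can be condensed into a short sketch. The function names are hypothetical, and a trivial XOR transform stands in for the AES core (it is not real cryptography); the point is only the data movement: the device key is decrypted with the master key and parked in key memory, and the caller receives a slot index rather than key bytes.

```python
def xor_cipher(data, key):
    """Toy stand-in for the AES core -- NOT real cryptography."""
    return bytes(d ^ key[i % len(key)] for i, d in enumerate(data))


def unwrap_device_key(master_key, wrapped_device_key, key_memory):
    """Decrypt a device key with the master key and store the result
    in on-chip key memory; the AP only ever sees a slot index."""
    device_key = xor_cipher(wrapped_device_key, master_key)  # "AES" decrypt
    key_memory.append(device_key)                            # DMA move
    return len(key_memory) - 1                               # pointer for the AP


master = b"\x13\x37\xca\xfe"
device_key = b"\xaa\xbb\xcc\xdd"
wrapped = xor_cipher(device_key, master)  # as stored in external flash

key_mem = []
slot = unwrap_device_key(master, wrapped, key_mem)
assert key_mem[slot] == device_key  # round-trips inside the engine
assert slot == 0                    # the AP holds an index, not key bytes
```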
  • key memory 510 might be a volatile memory (e.g., random access memory), in which case the decrypted keys are automatically removed from memory when NAS system 10 is powered down.
  • key memory 510 might be an additional non-volatile memory.
  • embodiments of the present invention ensure that the master key is secure in non-volatile key memory 512 and will be accessed in a secure manner in order to decrypt any further keys.
  • DMA Processor 504 might also process the keys by performing pre-determined logical operations (e.g., XOR with another datum, or the like).
  • the operand and the operators are specified by AP 150; however, at no time does AP 150 have access to any decrypted keys. Instead, AP 150 is provided a pointer to the decrypted key.
  • AP 150 provides the pointer to DMA Processor 504 , which moves the decrypted key from key memory 510 to the AES core 502 .
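The pointer-based indirection above can be sketched as follows. The class below is an invented illustration: the AP supplies the operator and operand and gets back only a new slot index, while the key bytes stay inside the engine's private storage.

```python
class KeyProcessor:
    """Sketch of DMA-side key processing with pointer handles.

    The AP specifies a logical operation (e.g., XOR with a datum)
    by pointer; the processed key never crosses to the AP.
    """

    def __init__(self):
        self._slots = []  # private key memory, invisible to the AP

    def load(self, key):
        self._slots.append(key)
        return len(self._slots) - 1

    def apply(self, slot, operator, operand):
        key = self._slots[slot]
        if operator == "xor":
            key = bytes(b ^ o for b, o in zip(key, operand))
        self._slots.append(key)
        return len(self._slots) - 1  # a fresh pointer, still no key bytes


kp = KeyProcessor()
p0 = kp.load(b"\x01\x02\x03\x04")
p1 = kp.apply(p0, "xor", b"\xff\xff\xff\xff")
assert kp._slots[p1] == b"\xfe\xfd\xfc\xfb"  # result stays engine-side
```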
  • DMA processor 504 includes one or more DMA channels.
  • one of the DMA channels (i.e., CH 0) might be dedicated to handling internal transfers of keys among the AES core 502, non-volatile key memory 512 and key memory 510.
  • DMA processor 504 controls read access to AES output FIFO 663 (shown in FIG. 6).
  • DMA processor 504 sets a signal to a predetermined level (e.g., signal "dma_aes_allow_fifo_read" might be set to a logic low value).
  • AES core 502 prevents any read of output FIFO 663 until the signal is set to another logic level (e.g., logic high).
  • AP 150 is prevented from accessing AES output FIFO 663 , which prevents any other process or user from obtaining the decrypted key.
  • arbiter 507 is configured to allow AP 150 to read external flash memory 152 (e.g., via TMA 100 ) and load the encrypted device key in AES Input FIFO 665 (shown in FIG. 6 ), which enables the decryption operation in AES core 502 .
  • AP 150 configures DMA processor 504 to read the decrypted key from AES output FIFO 663 and store it in internal key memory 510.
  • DMA processor 504 sets a control signal to a predetermined logic level, for example, a control signal “dma_aes_allow_fifo_read” might be set to logic high.
  • DMA processor 504 reads the content of output FIFO 663 and stores it in internal key memory 510 .
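The FIFO read-gating described in the bullets above can be modeled with a short sketch. The class is hypothetical; it only mirrors the stated behavior of the "dma_aes_allow_fifo_read" signal: while the signal is low, every read attempt is refused, so decrypted keys sitting in the output FIFO cannot leak.

```python
class AESOutputFIFO:
    """Sketch of an output FIFO gated by an allow-read signal."""

    def __init__(self):
        self._data = []
        self.allow_fifo_read = False  # "dma_aes_allow_fifo_read" low

    def push(self, word):
        self._data.append(word)

    def read(self):
        if not self.allow_fifo_read:
            raise PermissionError("FIFO reads disabled")
        return self._data.pop(0)


fifo = AESOutputFIFO()
fifo.push(b"decrypted-key")
try:
    fifo.read()              # read attempt while the signal is low
except PermissionError:
    pass
fifo.allow_fifo_read = True  # DMA processor raises the signal
assert fifo.read() == b"decrypted-key"
```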
  • FIG. 6 is a data flow diagram showing exemplary data flows during a key decryption and data decryption operation. Note that FIG. 6 only shows the subset of modules of FIG. 5 that are involved in the exemplary data flows discussed herein. This does not exclude elements of the system from participating in other data flows for other purposes.
  • one or more packets of data are received (e.g., received from network port 131 , by way of the upper layer protocol (ULP) accelerator 120 , which optionally offloads routine network, transport and application layer protocol processing from AP 150 ), and the received data packets are provided to traffic manager/arbitrator (TMA) 100 .
  • TMA 100 stores the received data packets in intermediate buffer queues 661 in shared memory 110 .
  • the received data packets might be re-assembled and, in some embodiments, translated to accommodate the internal bus width of the NAS system 10 , for example, AHB data bus 172 .
  • shared memory 110 outputs the data to be decrypted from the buffer queues 661 to DMA processor 504 via TMA 100 .
  • DMA processor 504 moves the master key (from non-volatile key memory 512 ) and an encrypted device key (for example from one of flash memory 152 or data memory 508 ) to AES core 502 (e.g., input FIFO 665 ), and AES core 502 decrypts the device key using the master key.
  • DMA processor 504 reads the decrypted device key from AES output FIFO 663.
  • DMA processor 504 delivers the decrypted device key to internal key memory 510 , where it is stored.
  • DMA processor 504 retrieves the decrypted device key from internal key memory 510 .
  • DMA processor 504 delivers the encrypted packet data to AES core 502 for decryption, along with the decrypted device key. This enables AES core 502 to perform the decryption operation on the encrypted packet data using the decrypted device key.
  • DMA processor 504 reads the decrypted data from AES output FIFO 663 .
  • DMA processor 504 delivers the decrypted data to TMA 100 , which transmits the decrypted data to a buffer queue 661 in shared memory 110 .
  • TMA 100 retrieves the decrypted data from the buffer queue 661 at an appropriate rate for forwarding the data to RDE 140 .
  • TMA 100 delivers the decrypted data to RDE 140 for storage in disk array 141 .
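The FIG. 6 data flow listed above can be condensed into one sketch: unwrap the device key with the master key, decrypt the payload with the device key, and hand the plaintext toward storage. The function names are invented, and a toy XOR transform again stands in for AES core 502.

```python
def xor_cipher(data, key):
    """Toy stand-in for AES core 502 -- NOT real cryptography."""
    return bytes(d ^ key[i % len(key)] for i, d in enumerate(data))


def receive_and_store(packet, master_key, wrapped_key, disk):
    """Condensed model of the FIG. 6 flow."""
    device_key = xor_cipher(wrapped_key, master_key)  # key decryption
    plaintext = xor_cipher(packet, device_key)        # data decryption
    disk.append(plaintext)                            # TMA -> RDE -> disk
    return plaintext


master = b"\x10\x20"
dev_key = b"\x0f\xf0"
wrapped = xor_cipher(dev_key, master)          # encrypted device key
ciphertext = xor_cipher(b"media payload", dev_key)

disk_array = []
out = receive_and_store(ciphertext, master, wrapped, disk_array)
assert out == b"media payload"
assert disk_array == [b"media payload"]
```

The real flow interposes buffer queues, the DMA channel, and the arbiter between these steps; the sketch keeps only the cryptographic data path.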
  • FIG. 7 is a flow chart of a method performed by NAS system 10 .
  • AP 150 controls operation of NAS system 10 .
  • AP 150 might control DMA processor 504 .
  • AP 150 retrieves an encrypted second key (the device key) from one of flash memory 152 or shared memory 110 , in which the device key is stored.
  • AP 150 delivers the encrypted second key to AES core 502 .
  • DMA processor 504 moves a first key (the master key) from non-volatile memory 512 to AES core 502 , for example by using direct memory access (DMA), while preventing AP 150 from accessing the first key.
  • AES core 502 uses the first key to decrypt the encrypted second key.
  • DMA processor 504 moves the second key to key memory 510 from AES core 502 , while preventing AP 150 from accessing the decrypted second key.
  • DMA processor 504 moves the second key from key memory 510 to AES core 502 , while preventing AP 150 from accessing the decrypted second key.
  • AP 150 delivers the encrypted packet data to AES core 502 for decryption.
  • AES core 502 decrypts the encrypted packet data using the second key.
  • the decrypted device key might be delivered by DMA processor 504 to the input of AES core 502 for decrypting an additional key, the additional key in turn used to decrypt the encrypted payload data.
  • the decrypted device key is stored in the key memory 510
  • the decrypted device key is re-encrypted with a different key (e.g., another master key stored in non-volatile key memory 512) by AES core 502 before the key is stored in key memory 510.
  • Although the examples described above include an encryption/decryption engine 402 that acts as the decryption engine for performing the decryption operations described above, a standalone decryption engine that provides the decryption functions might alternatively be used.
  • Described embodiments provide efficient data movement for encryption/decryption, and efficient key protection including hardware for decryption and storage of decrypted device keys.
  • the optional inclusion of non-volatile memory 512 and key memory 510 allows a designer to extend the number of keys supported. Thus, the number of keys supported is variable.
  • Described embodiments provide a multi-level key management and processing engine that supports a master key to unlock device specific keys on a chip.
  • the master keys might typically be programmed by the manufacturer of the device at the time of production, so that each vendor can select one or more master keys.
  • Hardware acceleration of key management, encryption and decryption with minimal control processor intervention might provide improved performance, while also providing the ability to hide the keys from the control processor (AP 150) to prevent hackers from modifying the boot-up code to access any protected keys.

Abstract

Described embodiments provide a server for transferring data packets of streaming data sessions between devices. The server includes an accelerator that, for received data packets, i) extracts header fields of the packets, ii) determines, based on the header fields, a destination for the packets, and iii) provides the packets to the destination. For data to be transmitted, the accelerator i) groups the data into packets, ii) generates header fields for the packets, and iii) provides the packets to the network. A memory arbiter manages accesses to memory that buffers data and stores keys corresponding to the data sessions. A storage medium stores media files corresponding to the data sessions. A key manager includes i) a first memory for storing a master key of the server, ii) a second memory for storing one or more keys corresponding to the data sessions, and iii) a processor to encrypt and decrypt data.

Description

CROSS-REFERENCE TO RELATED APPLICATIONS
This application is a continuation in part of U.S. patent application Ser. No. 11/226,507, filed Sep. 13, 2005 now U.S. Pat. No. 7,599,364, and is a continuation in part of U.S. patent application Ser. No. 11/273,750, filed Nov. 15, 2005 now U.S. Pat. No. 7,461,214, and is a continuation in part of U.S. patent application Ser. No. 11/364,979, filed Feb. 28, 2006, and is a continuation in Part of U.S. patent application Ser. No. 11/384,975, filed Mar. 20, 2006, and claims the benefit of U.S. provisional patent application Nos. 60/724,692, filed Oct. 7, 2005, 60/724,464, filed Oct. 7, 2005, 60/724,462, filed Oct. 7, 2005, 60/724,463, filed Oct. 7, 2005, 60/724,722, filed Oct. 7, 2005, 60/725,060, filed Oct. 7, 2005, and 60/724,573, filed Oct. 7, 2005, all of which applications are expressly incorporated by reference herein in their entireties.
BACKGROUND OF THE INVENTION
1. Field of the Invention
The present invention relates to security mechanisms for network attached media streaming systems.
2. Description of Related Art
Current and emerging digital rights management (DRM) solutions include multi-level key management solutions. Keys used for encryption/decryption are derived from various intermediate keys to ultimately determine a title key for a media file. As an example, a master key will unlock a device key and, using the device key, a media key is unlocked. Using this media key, a title key is discovered. In this process it is important that the decrypted keys are not exposed to users or processes outside the device, where they could be used by a hacker.
Conventional approaches were often completely software-based, with the decryption keys protected by software. Other approaches employed hardware-assisted methods that exposed the keys. Exposed keys might provide backdoor access for a hacker, allowing the keys to become compromised.
SUMMARY OF THE INVENTION
Embodiments of the present invention provide a server for transferring data packets of streaming data sessions between playback devices. The server includes a protocol accelerator that, for received data packets, i) extracts header fields of the packets, ii) determines, based on the header fields, a destination for the packets, and iii) provides the packets to the destination. For data to be transmitted, the protocol accelerator i) groups the data into packets, ii) generates header fields for the packets, and iii) provides the packets to the network. A control processor processes data. A memory arbiter manages accesses to shared memory that buffers data and stores keys corresponding to the data sessions. A storage medium stores media files corresponding to the data sessions. A key manager includes i) a first memory for storing a master key of the server, ii) a second memory for storing one or more keys corresponding to the data sessions, and iii) a processor to encrypt and decrypt data.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is a block diagram of an exemplary network attached storage (NAS) system for streaming media in accordance with embodiments of the present invention;
FIG. 2 is a block diagram of an exemplary ULP accelerator of the system shown in FIG. 1;
FIG. 3 is a block diagram of an exemplary TMA module of the system shown in FIG. 1;
FIG. 4 is a block diagram of a secure key management system in accordance with embodiments of the present invention;
FIG. 5 is a block diagram of an exemplary home network attached storage (NAS) server including the secure key management system of FIG. 4;
FIG. 6 is a data flow diagram showing exemplary data flows during a key decryption and data decryption operation in accordance with embodiments of the present invention; and
FIG. 7 is a flow chart showing a method of decrypting data in accordance with exemplary embodiments of the present invention.
DETAILED DESCRIPTION
This description of the exemplary embodiments is intended to be read in connection with the accompanying drawings, which are to be considered part of the entire written description.
FIG. 1 is a block diagram of an exemplary home media server and network attached storage (NAS) system 10 for a home media server application, which might be implemented as a system on a chip (SOC). NAS system 10 is connected to input sources, such as via USB port 130 or network port 131, and one or more mass storage devices, such as a hard disk drive (HDD) array 141. In NAS system 10, data from multiple sessions are concurrently stored to disk array 141, or played out to devices (e.g., PCs, TVs, digital video recorders (DVRs), personal video recorders (PVRs), and the like, not shown) on a home network via USB port 130 or network port 131. USB port 130 and network port 131 might also be used for control traffic. The term “session” broadly encompasses any open connection that has activity. For example, a receive session is a connection in which data is being received from a media device, reassembled and stored in disk array 141 (or other mass storage device), and a transmit session is a connection in which data is being read out from disk array 141 to a media device (e.g., TV, stereo, computer or the like) for playback. A control session is a connection in which data is transferred between a network and application processor (AP) 150 for processor functions that operate NAS system 10 (e.g., retrieving data or instructions from shared memory 110, reading from or writing to registers). The sessions use a shared memory 110 as an intermediate storage medium.
AP 150 might be an embedded ARM926EJ-S core by ARM Holdings, plc, Cambridge, UK, or any other embedded microprocessor. In FIG. 1, AP 150 is coupled to other elements of the system by at least one of two different buses: instruction bus 174 and data bus 172. In some embodiments, both instruction and data buses 174 and 172 are AMBA AHB buses. AP 150 is coupled to Traffic Manager/Arbitrator (TMA) 100 and flash memory 152 via instruction bus 174 and data bus 172. TMA 100 includes an exemplary memory controller interface 160. TMA 100 manages i) storage of media streams arriving via network port 131, ii) handling of control traffic for application processing, and iii) playback traffic during retrieval from HDD array 141. TMA 100 controls the flow of all traffic among the network controller 165, USB controller 164, AP 150, HDD array 141, and shared memory 110.
In some embodiments, shared memory 110 is implemented by a single-port DDR-2 DRAM. Double Data Rate (DDR) synchronous dynamic random access memory (SDRAM) is a high-bandwidth DRAM technology. Other types of memory might be used to implement shared memory 110. In some embodiments, disk array 141 is implemented as a 4-channel Serial Advanced Technology Attachment (SATA) hard disk array, although other types of storage devices, such as Parallel Advanced Technology Attachment (PATA) hard disks, optical disks, or the like might be employed.
AP 150 is also coupled, via a data bus 172, to Gigabit Ethernet media access control (GbE MAC) network controller 165, Upper Layer Protocol (ULP) accelerator 120, RAID decoder/encoder (RDE) module 140 (where RAID denotes redundant array of inexpensive disks), USB controller 164 and multi drive controller (MDC) 142.
AP 150 accesses shared memory 110 for several reasons. Part of shared memory 110 might generally contain program instructions and data for AP 150. AHB Instruction Bus 174 might access shared memory 110 to get instruction/program data on behalf of AP 150. Also, the control traffic destined for AP 150 inspection is stored in shared memory 110. In some embodiments, AHB instruction bus 174 has read access to shared memory 110, but the AHB data bus 172 is provided both read and write access to memory 110. AP 150 uses the write access to AHB data bus 172 to re-order data packets (e.g., TCP packets) received out-of-order. Also, AP 150 might insert data in and extract data from an existing packet stream in the shared memory 110.
AHB data bus 172 and AHB instruction bus 174 access shared memory 110 on behalf of AP 150 frequently. AHB data bus 172 is primarily used to access the internal register space and to access the data portion of the external shared memory. AHB instruction bus 174 is used to access instructions specific to AP 150, that are stored in shared memory 110. NAS system 10 receives media objects and control traffic from network port 131 and the objects/traffic are first processed by the local area network controller (e.g., Gigabit Ethernet controller GbE MAC 165) and ULP accelerator 120. ULP accelerator 120 transfers the media objects and control traffic to TMA 100, and TMA 100 stores the arriving traffic in shared memory 110. In the case of media object transfers, the incoming object data are temporarily stored in shared memory 110, and then transferred to RDE 140 for storage in disk array 141. TMA 100 also manages the retrieval requests from disk array 141 toward network port 131. While servicing media playback requests, data is transferred from disk array 141 and stored in buffers in shared memory 110. The data in the buffers is then transferred out to network controller 165 via ULP accelerator 120. The data are formed into packets for transmission using TCP/IP, with ULP accelerator 120 performing routine TCP protocol tasks to reduce the load on AP 150.
ULP accelerator 120 might generally offload routine TCP/IP protocol processing from AP 150. For example, ULP accelerator 120 might perform routine, high frequency calculations and decisions in hardware in real-time, while transferring infrequent, complex calculations and decisions to AP 150. ULP accelerator 120 might handle communication processing for most packets. For received packets, ULP accelerator 120 might extract one or more header fields of a received packet and perform a lookup to determine a destination for the received packet. ULP accelerator 120 might also tag a received packet from a previously-established connection with a pre-defined Queue Identifier (QID) used by TMA 100 for traffic queuing. ULP accelerator 120 might route packets received from new or unknown connections to AP 150 for further processing. Thus, ULP accelerator 120 provides a received packet to either i) disk array 141 via RDE 140 if the packet contains media content from a previously-established connection, or ii) AP 150 for further processing if the packet contains a control message or the packet is not recognized by ULP accelerator 120. In either case, TMA 100 might temporarily buffer received packets in shared memory 110.
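The routing decision described above can be sketched as a small function. The packet and table fields below are invented placeholders: known media connections are forwarded toward the disk tagged with their QID, while control messages and unknown connections go to the AP.

```python
def route_packet(packet, connection_table):
    """Sketch of the ULP accelerator's destination decision."""
    conn = connection_table.get(packet["flow"])
    if conn is None:
        return ("AP", None)          # new/unknown connection
    if packet["type"] == "control":
        return ("AP", conn["qid"])   # control traffic for the AP
    return ("disk", conn["qid"])     # established media stream


# Hypothetical connection table keyed by (address, port).
table = {("10.0.0.2", 5000): {"qid": 7}}
assert route_packet({"flow": ("10.0.0.2", 5000), "type": "media"},
                    table) == ("disk", 7)
assert route_packet({"flow": ("10.0.0.9", 9999), "type": "media"},
                    table) == ("AP", None)
```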
For transmitted packets, ULP accelerator 120 receives a data transfer request from TMA 100. The source of data might be disk array 141 (for a media stream), AP 150 (for a control message), or ULP accelerator 120 itself (for a TCP acknowledgement packet). Regardless of the packet source, ULP accelerator 120 might encapsulate an Ethernet header, an IP header and a TCP header for each outgoing packet and then provide each packet to network interface 165 or USB controller 164.
FIG. 2 shows greater detail of ULP accelerator 120 in NAS system 10. As shown in FIG. 2, NAS system 10 includes two separate data paths: a receive data path and a transmit data path. The receive path carries traffic from external devices, for example, via network controller 165 or USB controller 164, to TMA 100. The transmit path carries traffic from disk array 141 to external devices, for example, via network controller 165 or USB controller 164.
In the receive data path, ULP accelerator 120 receives packets, for example, Ethernet packets from network controller 165 or USB packets from USB controller 164. The L3 and L4 header fields of each packet are extracted by ULP accelerator 120. ULP accelerator 120 performs a connection lookup and decides where to send the received packet. An arriving packet from a previously-established connection is tagged with a pre-defined Queue ID (QID) used by TMA 100 for traffic queuing purposes.
A packet from a new or unknown connection might require inspection by AP 150. ULP accelerator 120 might tag the packet with a special QID and route the packet to AP 150. The final destination of an arriving packet after ULP accelerator 120 is either disk array 141 for storage (if the packet carries media content), or AP 150 for further processing (if the packet carries a control message or is not recognized by ULP accelerator 120). In either case, TMA 100 sends the packet to shared memory 110 for temporary buffering. To maintain streaming bandwidth, media data might be transferred between a client (not shown) and NAS system 10 in a bulk data transfer that is handled by hardware without processing by AP 150. In embodiments of the present invention, a bulk data transfer might be performed such as described in related U.S. patent application Ser. No. 11/364,979, filed Feb. 28, 2006.
In the transmit data path, ULP accelerator 120 receives a data transfer request from TMA 100. The source of data to be transferred might be disk array 141 (for a media stream), or ULP accelerator 120 itself (for control data, such as a TCP acknowledgement packet). Regardless of the traffic source, ULP accelerator 120 encapsulates an Ethernet header, an L3 (IP) header and an L4 (TCP) header for each outgoing packet and then sends the packet to one or more external devices, for example, via network controller 165 or USB controller 164, based on the destination port specified. In general, there are three sources for initiating data transmissions: 1) AP 150 can insert packets for transmission when necessary; 2) TMA 100 can stream data from disk array 141; and 3) ULP accelerator 120 can insert an acknowledge (ACK) packet when a timer expires. In the first two cases, data is forwarded to ULP accelerator 120 from TMA 100. In the third case, SAT 250 generates the data transfer request to ULP accelerator 120.
As shown in FIG. 2, ULP accelerator 120 processes received network packets in Header Parsing Unit (HPU) 220, which parses incoming data packets (PDUs), as indicated by signal PARSE_PDU, to determine where the L3 and L4 packet headers start, and delineates the packet boundary between different protocol levels by parsing the packet content. Checksum block 225 performs an L3 and L4 checksum on the incoming data packets to check packet integrity, as indicated by signal CALCULATE_CHECKSUM. Receive Buffer (RX_Buf) 230 buffers incoming packets for use by ULP accelerator 120, as indicated by signal BUFFER_PDU. TMA 100 is coupled to ULP accelerator 120, to provide ULP accelerator 120 with an interface to, for example, shared memory 110, as indicated by signals PDU_ENQUEUE, for placing data packets in a corresponding queue buffer, UPDATE_BP for updating one or more corresponding pointers of the queue buffer, such as a read or write pointer, and PDU_DEQUEUE, for removing data packets from a corresponding queue buffer.
Connection look-up unit (CLU) 240 is provided with received network data and extracts L3 and L4 fields to form a lookup address, as indicated by signal CONNECTION LOOKUP, and maintains parameters that uniquely identify an established connection, for example a Connection ID (CID) in a connection table for use by AP 150 in locating buffer space in shared memory 110 corresponding to each connection. CLU 240 might use the L3 and L4 fields to form a look-up address for content addressable memory (CAM) 241. CAM 241 stores parameters that uniquely identify an established connection. An index of matched CAM entries provides a CID for look-up in the connection table. The queue ID (QID) used by TMA 100 to identify a queue buffer might generally be one of the connection parameters maintained by CLU 240. CAM 241 allows real-time extraction of the QID within the hardware of ULP accelerator 120, as indicated by signal GET_QID. If an incoming packet does not match an entry in CAM 241, ULP accelerator 120 provides the packet to AP 150 for further processing.
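The CAM-based lookup performed by CLU 240 can be sketched as follows. The class is an invented software model: the L3/L4 fields form the lookup key, a CAM match yields a CID (the index of the matched entry), and the CID indexes a connection table where the QID is kept.

```python
class ConnectionLookup:
    """Sketch of CLU 240 / CAM 241: key -> CID -> QID."""

    def __init__(self):
        self.cam = []    # ordered entries: (src, dst, sport, dport)
        self.table = []  # CID -> {"qid": ...}

    def add(self, key, qid):
        self.cam.append(key)
        self.table.append({"qid": qid})

    def lookup(self, key):
        try:
            cid = self.cam.index(key)  # CAM match -> entry index (CID)
        except ValueError:
            return None                # no match: packet goes to the AP
        return self.table[cid]["qid"]


clu = ConnectionLookup()
clu.add(("192.168.1.5", "192.168.1.9", 40000, 80), qid=3)
assert clu.lookup(("192.168.1.5", "192.168.1.9", 40000, 80)) == 3
assert clu.lookup(("192.168.1.5", "192.168.1.9", 40001, 80)) is None
```

A hardware CAM compares all entries in parallel in one cycle; the linear `index()` search here is only a functional stand-in.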
Payload collection unit (PCU) 260 collects traffic from TMA 100 for transmission. Header encapsulation unit (HEU) 280 includes an encapsulation table of template L2, L3 and L4 headers to be added to each outgoing packet. Header Construction Unit (HCU) 270 builds the packet header according to the encapsulation table of HEU 280. Packet Integration Unit (PIU) 290 assembles a packet by combining packet header data and payload data to form outgoing packets. AP 150 controls the setup of ULP accelerator 120.
Sequence and Acknowledgement Table (SAT) 250 maintains a SAT table to track incoming packet sequence numbers and acknowledgement packets for received and transmitted data packets. The SAT table might be used for TCP/IP connections, or other connection oriented protocols. SAT 250 performs transport layer processing, for example, protocol specific counters for each connection and the remaining object length to be received for each CID. In general, SAT 250 might also offload most TCP operations from AP 150, for example, updating sequence numbers, setting timers, detecting out-of-sequence packets, recording acknowledgements, etc., as indicated by signals TCP_DATA, LOAD_TCP and ACK_INSERT. In embodiments of the present invention, ULP accelerator 120 might be implemented such as described in related U.S. patent application Ser. Nos. 11/226,507, filed Sep. 13, 2005 and 11/384,975, filed Mar. 20, 2006.
TMA 100 manages i) storage of media streams arriving via network port 131, ii) handling of control traffic for application processing, and iii) playback traffic during retrieval from disk array 141. TMA 100 controls the flow of all traffic among network controller 165, USB controller 164, shared memory 110, AP 150, and disk array 141. TMA 100 manages data storage to and retrieval from disk array 141 by providing the appropriate control information to RDE 140. Control traffic destined for inspection by AP 150 is also stored in shared memory 110, and AP 150 can read packets from shared memory 110. AP 150 also re-orders any packets received out of order. A portion of shared memory 110 and disk array 141 might be employed to store program instructions and data for AP 150. TMA 100 manages the access to shared memory 110 and disk array 141 by transferring control information from the disk to memory and memory to disk. TMA 100 also enables AP 150 to insert data and extract data to and from an existing packet stream stored in shared memory 110.
TMA 100 is shown in greater detail in FIG. 3. TMA 100 interfaces to at least five modules/devices: 1) shared memory 110; 2) ULP accelerator 120, which might also interface to a network controller (e.g., 165); 3) USB controller 164; 4) one or more non-volatile storage devices, for example, disk array 141; and 5) AP 150. Memory controller interface 160 provides the interface for managing accesses to shared memory 110 via a single memory port, such as described in related U.S. patent application Ser. No. 11/273,750, filed Nov. 15, 2005. As shown in FIG. 3, TMA 100 includes memory controller interface 160, buffer managers 370, 372, 374 and 376 that handle memory buffer and disk management, and schedulers 378, 380 and 382 that allocate the available memory access bandwidth of shared memory 110. Reassembly buffer/disk manager (RBM) 372 manages the transfer of control packets or packetized media objects from network port 131 to shared memory 110 for reassembly, and then, if appropriate, the transfer of the control packets or packetized media objects to disk array 141. Media playback buffer/disk manager (PBM) 374 manages the transfer of data out of disk array 141 to shared memory 110, and then the transfer of data from shared memory 110 to ULP accelerator 120 or USB controller 164 during playback. Application processor memory manager (AMM) 376 provides an interface for AP 150 to disk array 141 and shared memory 110.
Free buffer pool manager (FBM) 370 allocates and de-allocates buffers when needed by the RBM 372, PBM 374 or AMM 376, and maintains a free buffer list, where the free buffer list might be stored in a last-in, first-out (LIFO) queue. Memory access scheduler (MAS) 378, media playback scheduler (MPS) 380, and disk access scheduler (DAS) 382 manage the shared resources, such as memory access bandwidth and disk access bandwidth. Schedulers 378, 380 and 382 also provide a prescribed quality of service (QoS), in the form of allocated bandwidth and latency guarantees for media objects during playback. MAS 378 provides RBM 372, PBM 374 and AMM 376 guaranteed memory access bandwidth. MPS 380 arbitrates among multiple media transfer requests and provides allocated bandwidth and ensures continuous playback without any interruption. DAS 382 provides guaranteed accesses to the disk for the re-assembly process, playback process and access by AP 150.
MAS 378 manages bandwidth distribution among each media session, while memory controller interface 160 manages all memory accesses via a single memory port of shared memory 110. MAS 378 and memory controller interface 160 of TMA 100 work together to make efficient and effective use of the memory access resources. MAS 378 might generally provide a prescribed QoS (by pre-allocated time slots and round-robin polling) to a plurality of data transfer requests having different request types. Each of the various types of media streams involves a respectively different set of data transfers to and from shared memory 110 that are under control of MAS 378. For example, memory write operations include i) re-assembly media write, ii) playback media write, iii) application processor data transfer from disk array 141 to shared memory 110, and iv) application processor write memory operations. Memory read operations include i) re-assembly read, ii) playback media read, iii) application processor data transfer from shared memory 110 to disk array 141, and iv) application processor read memory operations.
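The pre-allocated time slots and round-robin polling attributed to MAS 378 can be sketched with a small scheduler. The names and slot order below are hypothetical; the sketch only shows how a fixed rotation of slots gives each requester a guaranteed share of the single memory port.

```python
from itertools import cycle


class MemoryAccessScheduler:
    """Sketch of round-robin slot allocation among request types."""

    def __init__(self, slot_order):
        self._slots = cycle(slot_order)
        self.pending = {name: [] for name in set(slot_order)}

    def request(self, name, op):
        self.pending[name].append(op)

    def next_grant(self):
        # Poll slots in rotation; serve the first owner with pending work.
        for _ in range(len(self.pending) * 2):
            owner = next(self._slots)
            if self.pending[owner]:
                return owner, self.pending[owner].pop(0)
        return None  # idle cycle


mas = MemoryAccessScheduler(["reassembly", "playback", "ap"])
mas.request("playback", "read-media")
mas.request("ap", "read-instr")
assert mas.next_grant() == ("playback", "read-media")
assert mas.next_grant() == ("ap", "read-instr")
```

A real MAS would weight the slot table to reflect each flow's allocated bandwidth rather than giving every requester one slot per rotation.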
The re-assembly media write process might typically include four steps: 1) receiving data from network port 131 or USB port 130; 2) writing the data to shared memory 110; 3) reading the data from shared memory 110; and 4) writing the data to disk array 141. The playback media read process might typically include four steps: 1) accessing and receiving data from disk array 141; 2) writing the data to shared memory 110; 3) reading the data from shared memory 110; and 4) sending the data to network port 131 or USB port 130.
The application processor data transfer from memory 110 to disk array 141 might typically include two steps: 1) reading the data from shared memory 110; and 2) writing the data to disk array 141. Similarly, the application processor data transfer from disk array 141 to shared memory 110 might typically include two steps: 1) reading the data from disk array 141; and 2) writing the data to shared memory 110. Further, AP 150 might write to or read from shared memory 110 directly without writing to or reading from disk array 141.
Thus, as described herein, NAS system 10 receives media objects and control traffic from network port 131 and the objects/traffic are first processed by network controller 165 and ULP accelerator 120. ULP accelerator 120 transfers the media objects and control traffic to TMA 100, and TMA 100 stores the arriving traffic in shared memory 110. In the case of media object transfers, the incoming object data is temporarily stored in shared memory 110, and then transferred to RDE 140 for storage in disk array 141. TMA 100 also manages retrieval requests from disk array 141 toward network port 131. While servicing media playback requests, data is transferred from disk array 141 and buffered in shared memory 110. The data is then transferred out to network port 131 via ULP accelerator 120, which forms the data into packets for transmission using TCP/IP. TMA 100 manages the storage to and retrieval from disk array 141 by providing the appropriate control information to RDE 140. In embodiments of the present invention, TMA 100 might be implemented such as described in related U.S. patent application Ser. No. 11/273,750, filed Nov. 15, 2005.
Digital Rights Management (“DRM”) solutions typically employ secure key processing to decrypt media files played on home media players to prevent the overall digital rights management from being compromised. Embodiments of the present invention might provide a localized key protection mechanism employing a hardware-based key management engine, and a subsystem for accelerated encryption/decryption of media content.
FIG. 4 shows an example of a system in which the keys are managed primarily in hardware, thus prohibiting any outside entity from gaining access to these keys. The exemplary secure key manager 400 includes key memory 410, key processing engine 404, and encryption/decryption engine 402. Key processing engine 404 might be implemented as a direct memory access (DMA) engine such as, for example an ARM PrimeCell PL080 by ARM Holdings, plc of Cambridge, UK, although other implementations might be employed. Encryption/Decryption Engine 402 might be implemented as an Advanced Encryption Standard (AES) core, such as a CS5210-40 core by Conexant Systems, Inc., Newport Beach, Calif., although other encryption/decryption engines and other encryption/decryption algorithms might be employed. As shown in FIG. 4, key manager 400 might be coupled to an Advanced Microcontroller Bus Architecture (AMBA) Advanced High-performance Bus (AHB), but any suitable type of data bus might be employed. Via the AHB Bus, key manager 400 might be in communication with other components of NAS system 10 shown in FIG. 1, such as AP 150, Memory Controller 160, RDE 140 and TMA 100.
FIG. 5 shows an exemplary media server key manager 500, which might be used for a home media server application. As shown in FIG. 5, decryption/encryption engine 402 might be implemented as AES core 502, which operates in accordance with the Advanced Encryption Standard (AES). Also as shown in FIG. 5, key processing engine 404 might be implemented as a direct memory access (DMA) processor, shown as DMA processor 504. In other embodiments, key processing engine 404 might be any module that moves data efficiently between non-volatile memory 512 and AES Core 502 and key memory 510 without making the data available to AP 150, such as a function built into TMA 100.
As described herein, intermediate storage is provided in memory 110 for storing incoming streaming data from network port 131 or while streaming out data from disk array 141 to network port 131. Control traffic arriving from network port 131 is also managed in memory 110. Shared memory 110 might include one or more buffer queues (shown as 661 in FIG. 6) to manage simultaneous data streams.
As described herein, NAS system 10 might simultaneously receive data from multiple sessions to be i) stored to disk array 141, ii) played out to devices on a home network (e.g., via network port 131), or iii) used for control traffic. Buffer queues 661 are employed to manage the various traffic flows. TMA 100 is employed to manage the traffic and bandwidth of shared memory 110. Data memory 508 provides intermediate storage, for example, for queuing or buffering encrypted payload data to be decrypted or the decrypted payload data.
Non-volatile key memory 512 might be used to store a set of one or more master keys. In some embodiments, to enhance security, non-volatile key memory 512 can only be written once (e.g., key memory 512 is a one-time programmable (OTP) memory). The master keys stored in non-volatile key memory 512 are used to decrypt keys that are stored in external memory (e.g., flash memory 152) by the media server manufacturer. The master keys are also programmed to non-volatile key memory 512 during the device manufacturing process.
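The write-once property of an OTP key memory can be modeled in a few lines. This is a hypothetical sketch; the slot layout, method names, and error behavior are illustrative, not from the patent:

```python
class OneTimeProgrammableKeyMemory:
    """Sketch of write-once (OTP) master key storage: each slot can be
    programmed exactly once, e.g. during device manufacturing."""

    def __init__(self, slots):
        self._keys = [None] * slots

    def program(self, slot, key):
        # A second write to the same slot is refused, modeling OTP behavior.
        if self._keys[slot] is not None:
            raise PermissionError("OTP slot already programmed")
        self._keys[slot] = key

    def read(self, slot):
        return self._keys[slot]
```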
In some embodiments, read access to the master keys in non-volatile key memory 512 is limited to DMA Key Processing Engine 504 (to the exclusion of AP 150). For example, as shown in FIG. 5, arbiter 507 might grant access of AHB Bus 520 to either AP 150 or DMA Key Processing Engine 504 at any given time, so that AP 150 cannot access AHB Bus 520 while DMA Processor 504 is reading decrypted keys from one of volatile key memory 510 or the output FIFO 663 (FIG. 6) of AES Core 502.
Due to the cost associated with memories employed by non-volatile key memory 512 and key memory 510, the amount of on-chip memory space might be limited. By storing encrypted keys in an optional external memory (e.g., flash memory 152), the total number of device specific keys that can be stored is extended. The device specific keys are encrypted, and the key (to decrypt the keys stored in flash memory 152) is programmed in non-volatile key memory 512.
When a decryption operation requiring a key is to be performed, AP 150 requests that DMA Processor 504 move a key from either non-volatile key memory 512 or key memory 510 to AES core 502. Once the key transfer is done, AP 150 inputs the data that are to be decrypted to AES core 502. Arbiter 507 then grants DMA Processor 504 access to AHB Bus 520, to the exclusion of AP 150. AES core 502 decrypts the key data, and the decrypted key is moved by DMA Processor 504 to volatile key memory 510. Arbiter 507 prevents access by AP 150 to the decrypted key stored in key memory 510.
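The sequence above (the arbiter locks AP 150 off the bus, the key is decrypted and parked in key memory, and the bus is returned only afterward) might be sketched as follows. All names are hypothetical, and a simple XOR stands in for the AES unwrap performed by the core:

```python
class Arbiter:
    """Grants the key-manager bus to exactly one master at a time."""

    def __init__(self):
        self.owner = "AP"

    def grant(self, master):
        self.owner = master

def unwrap_device_key(arbiter, master_key, encrypted_key, key_memory):
    arbiter.grant("DMA")                    # AP locked off the bus
    # XOR stands in for the AES decryption of the device key.
    plain = bytes(a ^ b for a, b in zip(encrypted_key, master_key))
    key_memory.append(plain)                # decrypted key parked in key memory
    arbiter.grant("AP")                     # bus returned after the key is hidden
    return len(key_memory) - 1              # AP receives an index, never key bytes
```

Note that the caller (modeling AP 150) gets back only an index into key memory; the decrypted key bytes themselves never cross the bus while the AP is the bus master.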
In some embodiments, such as shown in FIG. 5, key memory 510 might be a volatile memory (e.g., random access memory), in which case the decrypted keys are automatically removed from memory when NAS system 10 is powered down. In other embodiments, key memory 510 might be an additional non-volatile memory. Thus, as described with regard to FIG. 5, embodiments of the present invention ensure that the master key is secure in non-volatile key memory 512 and will be accessed in a secure manner in order to decrypt any further keys.
DMA Processor 504 might also process the keys by performing pre-determined logical operations (e.g., an XOR with another datum). The operand and the operator are specified by AP 150; however, at no time does AP 150 have access to any decrypted keys. Instead, AP 150 is provided a pointer to the decrypted key. When the decrypted key is to be used for decryption, AP 150 provides the pointer to DMA Processor 504, which moves the decrypted key from key memory 510 to AES core 502.
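This handle-based interface, in which the AP names an operation and operand but only ever holds an opaque pointer, might be sketched as below. All names are illustrative, and XOR stands in for the engine's cipher:

```python
# Operations the engine can apply to a stored key; the AP only names them.
OPS = {"xor": lambda key, datum: bytes(a ^ b for a, b in zip(key, datum))}

class KeyManager:
    def __init__(self):
        self._keys = []                      # private to the key manager

    def store(self, key):
        self._keys.append(key)
        return len(self._keys) - 1           # opaque handle returned to the AP

    def transform(self, handle, op, operand):
        # The AP specifies operator and operand; the result stays internal.
        self._keys[handle] = OPS[op](self._keys[handle], operand)

    def use_for_decrypt(self, handle, ciphertext):
        key = self._keys[handle]             # dereferenced inside the manager
        return bytes(c ^ k for c, k in zip(ciphertext, key))
```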
In some embodiments, DMA processor 504 includes one or more DMA channels. For example, one of the DMA channels (e.g., CH0) might be dedicated to handling internal transfers of keys among AES core 502, non-volatile key memory 512 and key memory 510. When an encrypted key stored in external memory, such as flash memory 152, is to be decrypted, AP 150 configures DMA CH0 with the following parameters: i) Source Addr=the address of the device key in non-volatile key memory 512, and ii) Dest Addr=the address of key memory 510. When the DMA channel is thus programmed, DMA processor 504 gates access to AES output FIFO 663 (shown in FIG. 6). For example, DMA processor 504 sets a signal to a predetermined level (e.g., signal "dma_aes_allow_fifo_read" might be set to a logic low value). While this signal is at the predetermined level (e.g., logic low), AES core 502 prevents any read of output FIFO 663 until the signal is set to the other logic level (e.g., logic high). Thus, AP 150 is prevented from accessing AES output FIFO 663, which prevents any other process or user from obtaining the decrypted key.
Once DMA processor 504 completes the transfer of the master key to AES core 502, arbiter 507 is configured to allow AP 150 to read external flash memory 152 (e.g., via TMA 100) and load the encrypted device key into AES input FIFO 665 (shown in FIG. 6), which enables the decryption operation in AES core 502. When AES core 502 completes the operation, AP 150 configures DMA processor 504 to read the decrypted key from AES output FIFO 663 and store it in internal key memory 510. For example, to store the decrypted key in key memory 510 when DMA processor 504 is the master of AHB bus 520, as enabled by arbiter 507, DMA processor 504 sets a control signal to a predetermined logic level; for example, control signal "dma_aes_allow_fifo_read" might be set to logic high. DMA processor 504 then reads the content of output FIFO 663 and stores it in internal key memory 510.
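The gating of output FIFO reads by the "dma_aes_allow_fifo_read" signal can be modeled as follows. This is a behavioral sketch only; the FIFO interface and the way the signal is set are assumptions for illustration:

```python
from collections import deque

class AesOutputFifo:
    """Sketch of an output FIFO whose reads are gated by a control signal."""

    def __init__(self):
        self._fifo = deque()
        self.dma_aes_allow_fifo_read = False   # logic low: reads blocked

    def push(self, word):
        # The cipher core deposits results here.
        self._fifo.append(word)

    def read(self):
        if not self.dma_aes_allow_fifo_read:
            return None                        # data withheld while gate is low
        return self._fifo.popleft()
```

While the gate is held low, any attempted read (modeling AP 150) yields nothing; only after the DMA processor raises the signal does the decrypted content become readable.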
FIG. 6 is a data flow diagram showing exemplary data flows during a key decryption and data decryption operation. Note that FIG. 6 only shows the subset of modules of FIG. 5 that are involved in the exemplary data flows discussed herein. This does not exclude elements of the system from participating in other data flows for other purposes.
As shown in FIG. 6, in data flow 601, one or more packets of data are received (e.g., received from network port 131, by way of the upper layer protocol (ULP) accelerator 120, which optionally offloads routine network, transport and application layer protocol processing from AP 150), and the received data packets are provided to traffic manager/arbitrator (TMA) 100. In data flow 602, TMA 100 stores the received data packets in intermediate buffer queues 661 in shared memory 110. The received data packets might be re-assembled and, in some embodiments, translated to accommodate the internal bus width of the NAS system 10, for example, AHB data bus 172.
In data flow 603, shared memory 110 outputs the data to be decrypted from the buffer queues 661 to DMA processor 504 via TMA 100. In data flow 604, DMA processor 504 moves the master key (from non-volatile key memory 512) and an encrypted device key (for example, from one of flash memory 152 or data memory 508) to AES core 502 (e.g., input FIFO 665), and AES core 502 decrypts the device key using the master key. In data flow 605, once the device key is decrypted, DMA processor 504 reads the decrypted device key from AES output FIFO 663.
In data flow 606, DMA processor 504 delivers the decrypted device key to internal key memory 510, where it is stored. In data flow 607, DMA processor 504 retrieves the decrypted device key from internal key memory 510. In data flow 608, DMA processor 504 delivers the encrypted packet data to AES core 502 for decryption, along with the decrypted device key. This enables AES core 502 to perform the decryption operation on the encrypted packet data using the decrypted device key.
In data flow 609, DMA processor 504 reads the decrypted data from AES output FIFO 663. In data flow 610, DMA processor 504 delivers the decrypted data to TMA 100, which transmits the decrypted data to a buffer queue 661 in shared memory 110. In data flow 611, TMA 100 retrieves the decrypted data from the buffer queue 661 at an appropriate rate for forwarding the data to RDE 140. In data flow 612, TMA 100 delivers the decrypted data to RDE 140 for storage in disk array 141.
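Data flows 601 through 612 can be compressed into a short hypothetical walk-through: packets are queued in shared memory, the device key is unwrapped with the master key, and each payload is decrypted before hand-off toward the disk. XOR stands in for AES, and every structure here is illustrative:

```python
def process_stream(packets, master_key, enc_device_key):
    # XOR substitute for AES; the key is repeated to cover the data length.
    xor = lambda data, key: bytes(a ^ b for a, b in zip(data, key * len(data)))

    buffer_queue = list(packets)                   # flows 601-602: ingest to queue
    device_key = xor(enc_device_key, master_key)   # flows 603-606: key unwrap
    decrypted = [xor(p, device_key) for p in buffer_queue]  # flows 607-609
    return decrypted                               # flows 610-612: toward RDE/disk
```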
FIG. 7 is a flow chart of a method performed by NAS system 10. As shown in FIG. 7, at step 700 AP 150 controls operation of NAS system 10. For example, AP 150 might control DMA processor 504. At step 702, AP 150 retrieves an encrypted second key (the device key) from one of flash memory 152 or shared memory 110, in which the device key is stored.
At step 704, AP 150 delivers the encrypted second key to AES core 502. At step 706, DMA processor 504 moves a first key (the master key) from non-volatile memory 512 to AES core 502, for example by using direct memory access (DMA), while preventing AP 150 from accessing the first key. At step 708, AES core 502 uses the first key to decrypt the encrypted second key.
At step 710, DMA processor 504 moves the second key to key memory 510 from AES core 502, while preventing AP 150 from accessing the decrypted second key. At step 712, DMA processor 504 moves the second key from key memory 510 to AES core 502, while preventing AP 150 from accessing the decrypted second key. At step 714, AP 150 delivers the encrypted packet data to AES core 502 for decryption. At step 716, AES core 502 decrypts the encrypted packet data using the second key.
One of ordinary skill in the art would understand that the exemplary system and data flows described above can be extended to multiple levels of keys. The decrypted device key might be delivered by DMA processor 504 to the input of AES core 502 for decrypting an additional key, the additional key in turn used to decrypt the encrypted payload data.
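A multi-level extension might be sketched as a key ladder, in which each key in the chain unwraps the next and only the final key decrypts payload data. The function names are hypothetical and XOR again stands in for AES:

```python
def xor_unwrap(wrapped, key):
    # XOR substitute for the AES decryption of one wrapped key.
    return bytes(a ^ b for a, b in zip(wrapped, key))

def unwrap_key_chain(master_key, wrapped_chain):
    """Walk a chain of wrapped keys: master -> device key -> ... -> content key."""
    key = master_key
    for wrapped in wrapped_chain:
        key = xor_unwrap(wrapped, key)
    return key
```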
Although an example is described above in which the decrypted device key is stored in key memory 510, in other embodiments, the decrypted device key is re-encrypted with a different key (e.g., another master key stored in non-volatile key memory 512) by AES core 502 before AES core 502 stores the key in key memory 510. Although the examples described above include an encryption/decryption engine 402 that acts as the decryption engine for the purpose of performing the decryption operations described above, a standalone decryption engine that provides the decryption functions might alternatively be used.
Described embodiments provide efficient data movement for encryption/decryption, and efficient key protection including hardware for decryption and storage of decrypted device keys. The optional inclusion of non-volatile memory 512 and key memory 510 allows a designer to extend the number of keys supported, making that number variable rather than fixed.
Described embodiments provide a multi-level key management and processing engine that supports a master key to unlock device-specific keys on a chip. The master keys might typically be programmed by the manufacturer of the device at the time of production, so that each vendor can select one or more master keys. Hardware acceleration of key management, encryption and decryption with minimal control processor intervention might provide improved performance while also hiding the keys from the control processor (AP 150), preventing hackers from modifying the boot-up code to access any protected keys.
Although the invention has been described in terms of exemplary embodiments, it is not limited thereto. Rather, the invention should be construed broadly, to include other variants and embodiments of the invention, which might be made by those skilled in the art without departing from the scope and range of equivalents of the invention.

Claims (15)

1. A server system for transmitting and receiving data packets corresponding to one or more streaming data sessions between one or more playback devices over at least one network connection, the server system comprising:
a protocol accelerator adapted to, for received data packets corresponding to the one or more data sessions, (i) extract one or more header fields of the received data packets, (ii) perform a lookup, based on the extracted one or more header fields, to determine a destination for the received data packets, and (iii) provide the received data packets to the destination, and for data to be transmitted, (i) group the data to be transmitted into data packets, (ii) generate one or more header fields for the data packets, and (iii) provide the data packets to the at least one network connection;
a control processor adapted to perform processing on (i) received data packets and (ii) data to be transmitted;
a memory arbiter adapted to manage accesses to a shared memory, wherein the shared memory is adapted to (i) buffer received data packets and data to be transmitted, and (ii) store one or more keys corresponding to the one or more data sessions;
a storage medium adapted to store media files corresponding to the one or more data sessions;
a key manager comprising: (i) a first memory for storing at least one master key of the server, (ii) a second memory for storing one or more keys corresponding to the one or more data sessions, (iii) an encryption/decryption processor adapted to encrypt and decrypt data packets, (iv) a direct memory access (DMA) processor, and (v) a bus arbiter adapted to exclusively couple a bus of the key manager to one of:
(a) the control processor, and (b) the DMA processor, wherein the DMA processor is adapted to (1) transfer the one or more keys between the encryption/decryption processor and the second memory and (2) provide a signal to the encryption/decryption processor when data is present for the encryption/decryption processor to decrypt,
wherein the encryption/decryption processor is further adapted to: (i) responsive to the signal, perform the decryption, (ii) provide a signal to the bus arbiter, and (iii) provide, once the bus arbiter provides exclusive bus access to the encryption/decryption processor, decrypted data to the second memory.
2. The server system of claim 1, wherein the encryption/decryption processor is further adapted to:
i) encrypt, using the at least one master key, the one or more keys corresponding to the one or more data sessions, and provide the encrypted one or more keys to the memory arbiter for storage to the shared memory, and
ii) retrieve the encrypted one or more keys from the shared memory by way of the memory arbiter, and decrypt, using the at least one master key, the encrypted one or more keys, wherein the decrypted one or more keys are not accessible to modules outside of the key manager.
3. The server system of claim 2, wherein the encryption/decryption engine is further adapted to employ the decrypted one or more keys to decrypt data packets for storage on the storage medium and encrypt data packets for transmission to the one or more playback devices.
4. The server system of claim 2, wherein the second memory of the key manager is adapted to store the decrypted one or more keys.
5. The server system of claim 1, wherein the invention is implemented in a monolithic integrated circuit chip.
6. The server system of claim 1, wherein the storage medium is a redundant array of inexpensive disks (RAID).
7. The server system of claim 1, wherein
the shared memory is implemented as a double data rate synchronous dynamic random access memory (DDR SDRAM);
the first memory is implemented as a one-time programmable (OTP) memory; and
the second memory is implemented as one of a random access memory (RAM) and a flash memory.
8. A method of processing, by a media server, data packets corresponding to one or more streaming data sessions between the media server and one or more playback devices over at least one network connection, the method comprising:
receiving, by a protocol accelerator, encrypted data packets corresponding to the one or more streaming data sessions, wherein the encrypted data packets include (i) an encrypted device key corresponding to the data session, and (ii) encrypted payload data;
extracting, by the protocol accelerator, the encrypted device key from the received data packet;
providing, by the protocol accelerator, (i) the encrypted device key to a control processor, and (ii) the encrypted data to a memory arbiter for buffering in a shared memory;
configuring, by a bus arbiter, a bus of the media server to allow access by a decryption processor, to the exclusion of the control processor;
retrieving, by the decryption processor via the bus of the media server, a master key of the media server from a non-volatile memory using direct memory access (DMA);
providing, by the control processor, the encrypted device key to the decryption processor;
decrypting the encrypted device key, by the decryption processor, using the master key;
storing, by the decryption processor, the decrypted device key in a volatile memory using DMA;
retrieving, by the decryption processor via the memory arbiter, the payload data from the shared memory;
configuring, by the bus arbiter, the bus to allow access by the control processor to the exclusion of the decryption processor;
decrypting, by the decryption processor, the payload data using the decrypted device key;
providing, by the decryption processor, the decrypted payload data to one of (i) a storage medium and (ii) a network connection.
9. The method of claim 8 further comprising:
allocating, by the memory arbiter, one or more first-in, first-out (FIFO) buffer queues in the shared memory to a given data session;
providing, by the memory arbiter, received payload data to the one or more FIFO buffer queues;
providing, by the memory arbiter, payload data from the one or more FIFO buffer queues to the decryption processor for decryption;
providing, by the decryption processor via the memory arbiter, decrypted payload data to another of the one or more FIFO buffer queues in the shared memory; and
providing, by the memory arbiter, the decrypted payload data to the storage medium.
10. The method of claim 8, further comprising:
for a transmit session, grouping, by at least one of the control processor and the protocol accelerator, data read from the storage medium into one or more corresponding data packets for transmission over the network connection to one or more playback devices; and
for a receive session, re-assembling, by at least one of the control processor and the protocol accelerator, data packets received by the media server from one of the playback devices into a single media session for storage on the storage medium.
11. The method of claim 10, further comprising:
for receive sessions:
extracting, by the protocol accelerator, one or more header fields of each received data packet;
determining, by the protocol accelerator based on the extracted one or more header fields, a queue identifier (QID) for the received data packet;
routing the received data packet, based on the determined QID, to one of i) the storage medium if the QID corresponds to a previously established data session and ii) the control processor if the QID is not recognized.
12. The method of claim 10, further comprising: for transmit sessions:
generating, by the memory arbiter, a data transfer request to the protocol accelerator;
retrieving, by the protocol accelerator, responsive to the transfer request, transmit data from one of the storage media and the control processor;
grouping, by the protocol accelerator, the transmit data into one or more data packets;
providing, by the protocol accelerator, the one or more data packets to the network connection.
13. The method of claim 8, wherein the decryption processor operates in accordance with the Advanced Encryption Standard (AES).
14. The method of claim 8, further comprising:
encrypting, by the decryption processor, the decrypted device key using a second master key;
storing, by the decryption processor, the encrypted device key to the storage medium.
15. The method of claim 8, wherein the method is implemented by a machine executing program code encoded on a non-transitory machine-readable storage medium.
US11/539,327 2005-09-13 2006-10-06 Method and apparatus for secure key management and protection Active 2029-02-28 US8218770B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US11/539,327 US8218770B2 (en) 2005-09-13 2006-10-06 Method and apparatus for secure key management and protection

Applications Claiming Priority (12)

Application Number Priority Date Filing Date Title
US11/226,507 US7599364B2 (en) 2005-09-13 2005-09-13 Configurable network connection address forming hardware
US72506005P 2005-10-07 2005-10-07
US72457305P 2005-10-07 2005-10-07
US72472205P 2005-10-07 2005-10-07
US72446205P 2005-10-07 2005-10-07
US72446405P 2005-10-07 2005-10-07
US72446305P 2005-10-07 2005-10-07
US72469205P 2005-10-07 2005-10-07
US11/273,750 US7461214B2 (en) 2005-11-15 2005-11-15 Method and system for accessing a single port memory
US11/364,979 US20070204076A1 (en) 2006-02-28 2006-02-28 Method and apparatus for burst transfer
US11/384,975 US7912060B1 (en) 2006-03-20 2006-03-20 Protocol accelerator and method of using same
US11/539,327 US8218770B2 (en) 2005-09-13 2006-10-06 Method and apparatus for secure key management and protection

Related Parent Applications (4)

Application Number Title Priority Date Filing Date
US11/226,507 Continuation-In-Part US7599364B2 (en) 2005-09-13 2005-09-13 Configurable network connection address forming hardware
US11/273,750 Continuation-In-Part US7461214B2 (en) 2005-09-13 2005-11-15 Method and system for accessing a single port memory
US11/364,979 Continuation-In-Part US20070204076A1 (en) 2005-09-13 2006-02-28 Method and apparatus for burst transfer
US11/384,975 Continuation-In-Part US7912060B1 (en) 2005-09-13 2006-03-20 Protocol accelerator and method of using same

Publications (2)

Publication Number Publication Date
US20070195957A1 US20070195957A1 (en) 2007-08-23
US8218770B2 true US8218770B2 (en) 2012-07-10

Family

ID=38428207

Family Applications (1)

Application Number Title Priority Date Filing Date
US11/539,327 Active 2029-02-28 US8218770B2 (en) 2005-09-13 2006-10-06 Method and apparatus for secure key management and protection

Country Status (1)

Country Link
US (1) US8218770B2 (en)



Citations (53)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5243596A (en) 1992-03-18 1993-09-07 Fischer & Porter Company Network architecture suitable for multicasting and resource locking
US5371877A (en) 1991-12-31 1994-12-06 Apple Computer, Inc. Apparatus for alternatively accessing single port random access memories to implement dual port first-in first-out memory
US5553269A (en) 1990-09-19 1996-09-03 International Business Machines Corporation Apparatus for monitoring sensor information from different types of sources
US5659687A (en) 1995-11-30 1997-08-19 Electronics & Telecommunications Research Institute Device for controlling memory data path in parallel processing computer system
US5684954A (en) 1993-03-20 1997-11-04 International Business Machine Corp. Method and apparatus for providing connection identifier by concatenating CAM's addresses at which containing matched protocol information extracted from multiple protocol header
US5937169A (en) 1997-10-29 1999-08-10 3Com Corporation Offload of TCP segmentation to a smart adapter
US5974482A (en) 1996-09-20 1999-10-26 Honeywell Inc. Single port first-in-first-out (FIFO) device having overwrite protection and diagnostic capabilities
US6233224B1 (en) 1997-09-25 2001-05-15 Sony Computer Laboratory, Inc. Communication method and data communications terminal, with data communication protocol for inter-layer flow control
US20020038379A1 (en) 2000-09-28 2002-03-28 Fujitsu Limited Routing apparatus
US20020080780A1 (en) 2000-08-10 2002-06-27 Mccormick James S. Buffering system for use in a communication switch that includes a multiprocessor control block and method therefore
US6434651B1 (en) 1999-03-01 2002-08-13 Sun Microsystems, Inc. Method and apparatus for suppressing interrupts in a high-speed network environment
US6449656B1 (en) 1999-07-30 2002-09-10 Intel Corporation Storing a frame header
US6453394B2 (en) 1997-10-03 2002-09-17 Matsushita Electric Industrial Co., Ltd. Memory interface device and memory address generation device
US20020194363A1 (en) 2001-06-14 2002-12-19 Cypress Semiconductor Corp. Programmable protocol processing engine for network packet devices
US20030041163A1 (en) * 2001-02-14 2003-02-27 John Rhoades Data processing architectures
US20030067934A1 (en) 2001-09-28 2003-04-10 Hooper Donald F. Multiprotocol decapsulation/encapsulation control structure and packet protocol conversion method
US20030086395A1 (en) 2001-11-07 2003-05-08 Vyankatesh Shanbhag System and method for efficient handover in wireless packet data network
US20030112802A1 (en) * 2001-11-16 2003-06-19 Nec Corporation Packet transfer method and apparatus
US6643259B1 (en) 1999-11-12 2003-11-04 3Com Corporation Method for optimizing data transfer in a data network
US20040019789A1 (en) * 2002-07-29 2004-01-29 Buer Mark L. System and method for cryptographic control of system configurations
US6697868B2 (en) 2000-02-28 2004-02-24 Alacritech, Inc. Protocol processing stack for use with intelligent network interface device
US20040042483A1 (en) 2002-08-30 2004-03-04 Uri Elzur System and method for TCP offload
US20040133713A1 (en) 2002-08-30 2004-07-08 Uri Elzur Method and system for data placement of out-of-order (OOO) TCP segments
US20040148512A1 (en) * 2003-01-24 2004-07-29 Samsung Electronics Co., Ltd. Cryptographic apparatus for supporting multiple modes
US20040153578A1 (en) 2002-03-08 2004-08-05 Uri Elzur System and method for handling transport protocol segments
US20040153642A1 (en) * 2002-05-14 2004-08-05 Serge Plotkin Encryption based security system for network storage
US20040165538A1 (en) 2003-02-21 2004-08-26 Swami Yogesh Prem System and method for movement detection and congestion response for transport layer protocol
US6788704B1 (en) 1999-08-05 2004-09-07 Intel Corporation Network adapter with TCP windowing support
US20040249957A1 (en) 2003-05-12 2004-12-09 Pete Ekis Method for interface of TCP offload engines to operating systems
US20050021680A1 (en) 2003-05-12 2005-01-27 Pete Ekis System and method for interfacing TCP offload engines using an interposed socket library
US6868459B1 (en) 2001-10-19 2005-03-15 Lsi Logic Corporation Methods and structure for transfer of burst transactions having unspecified length
US6876941B2 (en) 2001-04-12 2005-04-05 Arm Limited Testing compliance of a device with a bus protocol
US6885673B1 (en) 2001-05-21 2005-04-26 Advanced Micro Devices, Inc. Queue pair wait state management in a host channel adapter
US20050108555A1 (en) * 1999-12-22 2005-05-19 Intertrust Technologies Corporation Systems and methods for protecting data secrecy and integrity
US6901072B1 (en) 2003-05-15 2005-05-31 Foundry Networks, Inc. System and method for high speed packet transmission implementing dual transmit and receive pipelines
US20050132226A1 (en) * 2003-12-11 2005-06-16 David Wheeler Trusted mobile platform architecture
US6920510B2 (en) 2002-06-05 2005-07-19 Lsi Logic Corporation Time sharing a single port memory among a plurality of ports
US20050165985A1 (en) 2003-12-29 2005-07-28 Vangal Sriram R. Network protocol processor
US6938097B1 (en) 1999-07-02 2005-08-30 Sonicwall, Inc. System for early packet steering and FIFO-based management with priority buffer support
US20050213768A1 (en) * 2004-03-24 2005-09-29 Durham David M Shared cryptographic key in networks with an embedded agent
US20060085652A1 (en) * 2004-10-20 2006-04-20 Zimmer Vincent J Data security
US7035291B2 (en) 2001-05-02 2006-04-25 Ron Grinfeld TCP transmission acceleration
US20060089994A1 (en) * 2002-03-05 2006-04-27 Hayes John W Concealing a network connected device
US7085866B1 (en) 2002-02-19 2006-08-01 Hobson Richard F Hierarchical bus structure and memory access protocol for multiprocessor systems
US20060179309A1 (en) * 2005-02-07 2006-08-10 Microsoft Corporation Systems and methods for managing multiple keys for file encryption and decryption
EP1691526A1 (en) 2005-02-11 2006-08-16 Samsung Electronics Co., Ltd. Transmission control protocol (TCP) congestion control using multiple TCP acknowledgements (ACKs)
US20060218527A1 (en) * 2005-03-22 2006-09-28 Gururaj Nagendra Processing secure metadata at wire speed
US20060227811A1 (en) * 2005-04-08 2006-10-12 Hussain Muhammad R TCP engine
US20060288235A1 (en) * 2005-06-17 2006-12-21 Fujitsu Limited Secure processor and system
US7185266B2 (en) 2003-02-12 2007-02-27 Alacritech, Inc. Network interface device for error detection using partial CRCS of variable length message portions
US7236492B2 (en) 2001-11-21 2007-06-26 Alcatel-Lucent Canada Inc. Configurable packet processor
US7287102B1 (en) 2003-01-31 2007-10-23 Marvell International Ltd. System and method for concatenating data
US20080019529A1 (en) * 2004-01-16 2008-01-24 Kahn Raynold M Distribution of video content using client to host pairing of integrated receivers/decoders

Patent Citations (54)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5553269A (en) 1990-09-19 1996-09-03 International Business Machines Corporation Apparatus for monitoring sensor information from different types of sources
US5371877A (en) 1991-12-31 1994-12-06 Apple Computer, Inc. Apparatus for alternatively accessing single port random access memories to implement dual port first-in first-out memory
US5243596A (en) 1992-03-18 1993-09-07 Fischer & Porter Company Network architecture suitable for multicasting and resource locking
US5684954A (en) 1993-03-20 1997-11-04 International Business Machine Corp. Method and apparatus for providing connection identifier by concatenating CAM's addresses at which containing matched protocol information extracted from multiple protocol header
US5659687A (en) 1995-11-30 1997-08-19 Electronics & Telecommunications Research Institute Device for controlling memory data path in parallel processing computer system
US5974482A (en) 1996-09-20 1999-10-26 Honeywell Inc. Single port first-in-first-out (FIFO) device having overwrite protection and diagnostic capabilities
US6233224B1 (en) 1997-09-25 2001-05-15 Sony Computer Laboratory, Inc. Communication method and data communications terminal, with data communication protocol for inter-layer flow control
US6732252B2 (en) 1997-10-03 2004-05-04 Matsushita Electric Industrial Co., Ltd. Memory interface device and memory address generation device
US6453394B2 (en) 1997-10-03 2002-09-17 Matsushita Electric Industrial Co., Ltd. Memory interface device and memory address generation device
US5937169A (en) 1997-10-29 1999-08-10 3Com Corporation Offload of TCP segmentation to a smart adapter
US6434651B1 (en) 1999-03-01 2002-08-13 Sun Microsystems, Inc. Method and apparatus for suppressing interrupts in a high-speed network environment
US6938097B1 (en) 1999-07-02 2005-08-30 Sonicwall, Inc. System for early packet steering and FIFO-based management with priority buffer support
US6449656B1 (en) 1999-07-30 2002-09-10 Intel Corporation Storing a frame header
US6788704B1 (en) 1999-08-05 2004-09-07 Intel Corporation Network adapter with TCP windowing support
US6643259B1 (en) 1999-11-12 2003-11-04 3Com Corporation Method for optimizing data transfer in a data network
US20050108555A1 (en) * 1999-12-22 2005-05-19 Intertrust Technologies Corporation Systems and methods for protecting data secrecy and integrity
US6697868B2 (en) 2000-02-28 2004-02-24 Alacritech, Inc. Protocol processing stack for use with intelligent network interface device
US20020080780A1 (en) 2000-08-10 2002-06-27 Mccormick James S. Buffering system for use in a communication switch that includes a multiprocessor control block and method therefore
US20020038379A1 (en) 2000-09-28 2002-03-28 Fujitsu Limited Routing apparatus
US20030041163A1 (en) * 2001-02-14 2003-02-27 John Rhoades Data processing architectures
US6876941B2 (en) 2001-04-12 2005-04-05 Arm Limited Testing compliance of a device with a bus protocol
US7035291B2 (en) 2001-05-02 2006-04-25 Ron Grinfeld TCP transmission acceleration
US6885673B1 (en) 2001-05-21 2005-04-26 Advanced Micro Devices, Inc. Queue pair wait state management in a host channel adapter
US20020194363A1 (en) 2001-06-14 2002-12-19 Cypress Semiconductor Corp. Programmable protocol processing engine for network packet devices
US20030067934A1 (en) 2001-09-28 2003-04-10 Hooper Donald F. Multiprotocol decapsulation/encapsulation control structure and packet protocol conversion method
US6868459B1 (en) 2001-10-19 2005-03-15 Lsi Logic Corporation Methods and structure for transfer of burst transactions having unspecified length
US20030086395A1 (en) 2001-11-07 2003-05-08 Vyankatesh Shanbhag System and method for efficient handover in wireless packet data network
US20030112802A1 (en) * 2001-11-16 2003-06-19 Nec Corporation Packet transfer method and apparatus
US7236492B2 (en) 2001-11-21 2007-06-26 Alcatel-Lucent Canada Inc. Configurable packet processor
US7085866B1 (en) 2002-02-19 2006-08-01 Hobson Richard F Hierarchical bus structure and memory access protocol for multiprocessor systems
US20060089994A1 (en) * 2002-03-05 2006-04-27 Hayes John W Concealing a network connected device
US20040153578A1 (en) 2002-03-08 2004-08-05 Uri Elzur System and method for handling transport protocol segments
US20040153642A1 (en) * 2002-05-14 2004-08-05 Serge Plotkin Encryption based security system for network storage
US6920510B2 (en) 2002-06-05 2005-07-19 Lsi Logic Corporation Time sharing a single port memory among a plurality of ports
US20040019789A1 (en) * 2002-07-29 2004-01-29 Buer Mark L. System and method for cryptographic control of system configurations
US20040133713A1 (en) 2002-08-30 2004-07-08 Uri Elzur Method and system for data placement of out-of-order (OOO) TCP segments
US20040042483A1 (en) 2002-08-30 2004-03-04 Uri Elzur System and method for TCP offload
US20040148512A1 (en) * 2003-01-24 2004-07-29 Samsung Electronics Co., Ltd. Cryptographic apparatus for supporting multiple modes
US7287102B1 (en) 2003-01-31 2007-10-23 Marvell International Ltd. System and method for concatenating data
US7185266B2 (en) 2003-02-12 2007-02-27 Alacritech, Inc. Network interface device for error detection using partial CRCS of variable length message portions
US20040165538A1 (en) 2003-02-21 2004-08-26 Swami Yogesh Prem System and method for movement detection and congestion response for transport layer protocol
US20040249957A1 (en) 2003-05-12 2004-12-09 Pete Ekis Method for interface of TCP offload engines to operating systems
US20050021680A1 (en) 2003-05-12 2005-01-27 Pete Ekis System and method for interfacing TCP offload engines using an interposed socket library
US6901072B1 (en) 2003-05-15 2005-05-31 Foundry Networks, Inc. System and method for high speed packet transmission implementing dual transmit and receive pipelines
US20050132226A1 (en) * 2003-12-11 2005-06-16 David Wheeler Trusted mobile platform architecture
US20050165985A1 (en) 2003-12-29 2005-07-28 Vangal Sriram R. Network protocol processor
US20080019529A1 (en) * 2004-01-16 2008-01-24 Kahn Raynold M Distribution of video content using client to host pairing of integrated receivers/decoders
US20050213768A1 (en) * 2004-03-24 2005-09-29 Durham David M Shared cryptographic key in networks with an embedded agent
US20060085652A1 (en) * 2004-10-20 2006-04-20 Zimmer Vincent J Data security
US20060179309A1 (en) * 2005-02-07 2006-08-10 Microsoft Corporation Systems and methods for managing multiple keys for file encryption and decryption
EP1691526A1 (en) 2005-02-11 2006-08-16 Samsung Electronics Co., Ltd. Transmission control protocol (TCP) congestion control using multiple TCP acknowledgements (ACKs)
US20060218527A1 (en) * 2005-03-22 2006-09-28 Gururaj Nagendra Processing secure metadata at wire speed
US20060227811A1 (en) * 2005-04-08 2006-10-12 Hussain Muhammad R TCP engine
US20060288235A1 (en) * 2005-06-17 2006-12-21 Fujitsu Limited Secure processor and system

Non-Patent Citations (7)

* Cited by examiner, † Cited by third party
Title
ARM, AMBA AXI Protocol v1.0 Specification, 2003, 108 pages.
ARM, AMBA Specification, Rev. 2.0, ARM Ltd., 1999, 230 pages.
ARM926EJ-S Technical Reference Manual, ARM Ltd., 2001, 200 pages.
Christoforos, "Analysis of a Reconfigurable Network Processor", 2006, Parallel and Distributed Processing Symposium. *
Information Sciences Institute, University of Southern California, Transmission Control Protocol Darpa Internet Program Protocol Specification, Sep. 1981, pp. 1-88, Marina del Rey, CA.
Specification for the Advanced Encryption Standard (AES), Federal Information Processing Standard (FIPS) Publication 197, 2001.
Tanenbaum, Andrew S., Structured Computer Organization, 1984, Prentice-Hall, Inc. pp. 10-12.

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8782433B2 (en) * 2008-09-10 2014-07-15 Inside Secure Data security
US20100064144A1 (en) * 2008-09-10 2010-03-11 Atmel Corporation Data security
US9973335B2 (en) * 2012-03-28 2018-05-15 Intel Corporation Shared buffers for processing elements on a network device
US20130262868A1 (en) * 2012-03-28 2013-10-03 Ben-Zion Friedman Shared buffers for processing elements on a network device
US9503436B1 (en) * 2012-06-07 2016-11-22 Western Digital Technologies, Inc. Methods and systems for NAS device pairing and mirroring
US11323479B2 (en) * 2013-07-01 2022-05-03 Amazon Technologies, Inc. Data loss prevention techniques
US10313236B1 (en) * 2013-12-31 2019-06-04 Sanmina Corporation Method of flow based services for flash storage
US9141814B1 (en) * 2014-06-03 2015-09-22 Zettaset, Inc. Methods and computer systems with provisions for high availability of cryptographic keys
US9912473B2 (en) 2014-06-03 2018-03-06 Zettaset, Inc. Methods and computer systems with provisions for high availability of cryptographic keys
US9760504B2 (en) 2015-09-01 2017-09-12 International Business Machines Corporation Nonvolatile memory data security
US9734095B2 (en) 2015-09-01 2017-08-15 International Business Machines Corporation Nonvolatile memory data security
US20220385598A1 (en) * 2017-02-12 2022-12-01 Mellanox Technologies, Ltd. Direct data placement
TWI774986B (en) * 2019-09-09 2022-08-21 新唐科技股份有限公司 Key storage system and key storage method

Also Published As

Publication number Publication date
US20070195957A1 (en) 2007-08-23

Similar Documents

Publication Publication Date Title
US8218770B2 (en) Method and apparatus for secure key management and protection
US8521955B2 (en) Aligned data storage for network attached media streaming systems
US7397797B2 (en) Method and apparatus for performing network processing functions
US7924868B1 (en) Internet protocol (IP) router residing in a processor chipset
US7634650B1 (en) Virtualized shared security engine and creation of a protected zone
US7290134B2 (en) Encapsulation mechanism for packet processing
US20050060538A1 (en) Method, system, and program for processing of fragmented datagrams
US7698541B1 (en) System and method for isochronous task switching via hardware scheduling
US7320071B1 (en) Secure universal serial bus
US7362772B1 (en) Network processing pipeline chipset for routing and host packet processing
US7995759B1 (en) System and method for parallel compression of a single data stream
US20130086332A1 (en) Task Queuing in a Multi-Flow Network Processor Architecture
US7188250B1 (en) Method and apparatus for performing network processing functions
US20040123123A1 (en) Methods and apparatus for accessing security association information in a cryptography accelerator
US8438641B2 (en) Security protocol processing for anti-replay protection
US8744080B2 (en) Encrypted data recording apparatus
WO2009045586A2 (en) Encoded digital video content protection between transport stream processor and decoder
US10031758B2 (en) Chained-instruction dispatcher
US20220201020A1 (en) Dynamic adaption of arw management with enhanced security
US7603549B1 (en) Network security protocol processor and method thereof
US9804959B2 (en) In-flight packet processing
US7610444B2 (en) Method and apparatus for disk address and transfer size management
US20060013397A1 (en) Channel adapter managed trusted queue pairs
KR20050094729A (en) Content data processing device and method

Legal Events

Date Code Title Description
AS Assignment

Owner name: AGERE SYSTEMS INC., PENNSYLVANIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:ARULAMBALAM, AMBALAVANAR;CLUNE, DAVID E;HEINTZE, NEVIN C.;AND OTHERS;REEL/FRAME:020120/0524;SIGNING DATES FROM 20070312 TO 20070314

STCF Information on status: patent grant

Free format text: PATENTED CASE

AS Assignment

Owner name: DEUTSCHE BANK AG NEW YORK BRANCH, AS COLLATERAL AG

Free format text: PATENT SECURITY AGREEMENT;ASSIGNORS:LSI CORPORATION;AGERE SYSTEMS LLC;REEL/FRAME:032856/0031

Effective date: 20140506

AS Assignment

Owner name: AVAGO TECHNOLOGIES GENERAL IP (SINGAPORE) PTE. LTD

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:AGERE SYSTEMS LLC;REEL/FRAME:035365/0634

Effective date: 20140804

FPAY Fee payment

Year of fee payment: 4

AS Assignment

Owner name: AGERE SYSTEMS LLC, PENNSYLVANIA

Free format text: TERMINATION AND RELEASE OF SECURITY INTEREST IN PATENT RIGHTS (RELEASES RF 032856-0031);ASSIGNOR:DEUTSCHE BANK AG NEW YORK BRANCH, AS COLLATERAL AGENT;REEL/FRAME:037684/0039

Effective date: 20160201

Owner name: LSI CORPORATION, CALIFORNIA

Free format text: TERMINATION AND RELEASE OF SECURITY INTEREST IN PATENT RIGHTS (RELEASES RF 032856-0031);ASSIGNOR:DEUTSCHE BANK AG NEW YORK BRANCH, AS COLLATERAL AGENT;REEL/FRAME:037684/0039

Effective date: 20160201

AS Assignment

Owner name: BANK OF AMERICA, N.A., AS COLLATERAL AGENT, NORTH CAROLINA

Free format text: PATENT SECURITY AGREEMENT;ASSIGNOR:AVAGO TECHNOLOGIES GENERAL IP (SINGAPORE) PTE. LTD.;REEL/FRAME:037808/0001

Effective date: 20160201

AS Assignment

Owner name: AVAGO TECHNOLOGIES GENERAL IP (SINGAPORE) PTE. LTD., SINGAPORE

Free format text: TERMINATION AND RELEASE OF SECURITY INTEREST IN PATENTS;ASSIGNOR:BANK OF AMERICA, N.A., AS COLLATERAL AGENT;REEL/FRAME:041710/0001

Effective date: 20170119

AS Assignment

Owner name: AVAGO TECHNOLOGIES INTERNATIONAL SALES PTE. LIMITE

Free format text: MERGER;ASSIGNOR:AVAGO TECHNOLOGIES GENERAL IP (SINGAPORE) PTE. LTD.;REEL/FRAME:047230/0133

Effective date: 20180509

AS Assignment

Owner name: AVAGO TECHNOLOGIES INTERNATIONAL SALES PTE. LIMITE

Free format text: CORRECTIVE ASSIGNMENT TO CORRECT THE EFFECTIVE DATE OF MERGER TO 09/05/2018 PREVIOUSLY RECORDED AT REEL: 047230 FRAME: 0133. ASSIGNOR(S) HEREBY CONFIRMS THE MERGER;ASSIGNOR:AVAGO TECHNOLOGIES GENERAL IP (SINGAPORE) PTE. LTD.;REEL/FRAME:047630/0456

Effective date: 20180905

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 8TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1552); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 8

AS Assignment

Owner name: BROADCOM INTERNATIONAL PTE. LTD., SINGAPORE

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:AVAGO TECHNOLOGIES INTERNATIONAL SALES PTE. LIMITED;REEL/FRAME:053771/0901

Effective date: 20200826

AS Assignment

Owner name: AVAGO TECHNOLOGIES INTERNATIONAL SALES PTE. LIMITED, SINGAPORE

Free format text: MERGER;ASSIGNORS:AVAGO TECHNOLOGIES INTERNATIONAL SALES PTE. LIMITED;BROADCOM INTERNATIONAL PTE. LTD.;REEL/FRAME:062952/0850

Effective date: 20230202

FEPP Fee payment procedure

Free format text: MAINTENANCE FEE REMINDER MAILED (ORIGINAL EVENT CODE: REM.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY