US20090309896A1 - Multi Instance Unified Shader Engine Filtering System With Level One and Level Two Cache - Google Patents

Multi Instance Unified Shader Engine Filtering System With Level One and Level Two Cache

Info

Publication number
US20090309896A1
Authority
US
United States
Prior art keywords
shader
texel data
level
texture
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US12/476,202
Inventor
Anthony P. DeLaurier
Mark Leather
Robert S. Hartog
Michael J. Mantor
Mark C. Fowler
Jeffrey T. Brady
Marcos P. Zini
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Advanced Micro Devices Inc
Original Assignee
Advanced Micro Devices Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Advanced Micro Devices Inc
Priority to US12/476,202
Assigned to ADVANCED MICRO DEVICES, INC. Assignment of assignors interest (see document for details). Assignors: LEATHER, MARK; BRADY, JEFFREY T.; ZINI, MARCOS P.; FOWLER, MARK C.; MANTOR, MICHAEL J.; DELAURIER, ANTHONY P.; HARTOG, ROBERT S.
Publication of US20090309896A1
Status: Abandoned

Classifications

    • G - PHYSICS
        • G06 - COMPUTING; CALCULATING OR COUNTING
            • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
                • G06T 1/00 - General purpose image data processing
                    • G06T 1/20 - Processor architectures; Processor configuration, e.g. pipelining
                    • G06T 1/60 - Memory management
        • G09 - EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
            • G09G - ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
                • G09G 5/00 - Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
                    • G09G 5/36 - characterised by the display of a graphic pattern, e.g. using an all-points-addressable [APA] memory
                        • G09G 5/363 - Graphics controllers
                • G09G 2360/00 - Aspects of the architecture of display systems
                    • G09G 2360/06 - Use of more than one graphics processor to process data before displaying to one or more screens

Abstract

Apparatus and systems utilizing multiple shader engines where each shader engine comprises multiple rows of shader engine filters combined with level one and level two cache systems. Each unified shader engine filter row comprises a shader pipe array and a texture mapping unit with access to a level one cache system and a level two cache. The shader pipe array accepts texture requests for a specified pixel from a resource and performs associated rendering calculations, outputting texel data. The texture mapping unit retrieves texel data stored in a level one cache system, with the ability to read and write to and from a level two cache system, and through formatting and bilinear filtering interpolations generates a formatted bilinear result based on the specific pixel's neighboring texels. Utilizing multiple rows of shader engine filters within a shader engine allows for the parallel processing of multiple simultaneous resource requests. Utilizing multiple shader engines allows for still greater throughput through simultaneous processing across engines. A method utilizing multiple shader engines to perform texture mapping is also presented.

Description

    CROSS REFERENCE TO RELATED APPLICATIONS
  • This application claims the benefit under 35 U.S.C. §119(e) of U.S. Provisional Patent Application No. 61/057,499, filed May 30, 2008; U.S. Provisional Patent Application No. 61/057,483, filed May 30, 2008; U.S. Provisional Patent Application No. 61/057,492, filed May 30, 2008; U.S. Provisional Patent Application No. 61/057,504, filed May 30, 2008; and U.S. Provisional Patent Application No. 61/057,513, filed May 30, 2008, all of which are incorporated by reference herein in their entireties.
  • BACKGROUND
  • 1. Field of the Invention
  • The present invention is generally directed to computing operations performed in computing systems, and more particularly directed to graphics processing tasks performed in computing systems.
  • 2. Related Art
  • A graphics processing unit (GPU) is a complex integrated circuit that is specially designed to perform graphics processing tasks. A GPU can, for example, execute graphics processing tasks required by an end-user application, such as a video game application. In such an example, there are several layers of software between the end-user application and the GPU.
  • The end-user application communicates with an application programming interface (API). An API allows the end-user application to output graphics data and commands in a standardized format, rather than in a format that is dependent on the GPU. Several types of APIs are commercially available, including DirectX® developed by Microsoft Corp. and OpenGL® developed by Silicon Graphics, Inc. The API communicates with a driver. The driver translates standard code received from the API into a native format of instructions understood by the GPU. The driver is typically written by the manufacturer of the GPU. The GPU then executes the instructions from the driver.
  • A GPU produces the pixels that make up an image from a higher level description of its components in a process known as rendering. GPUs typically utilize a concept of continuous rendering by the use of pipelines to process pixel, texture, and geometric data. These pipelines are often referred to as a collection of fixed function special purpose pipelines, such as rasterizers, setup engines, color blenders, hierarchical depth, and texture mapping, along with programmable stages that can be accomplished in shader pipes or shader pipelines, “shader” being a term in computer graphics referring to a set of software instructions used by a graphics resource primarily to perform rendering effects. In addition, GPUs can also employ multiple pipelines in a parallel processing design to obtain higher throughput. A group of shader pipelines can also be referred to as a shader pipe array.
  • Manufacturing defects and subsequent failures somewhere within a pipeline can become apparent as a shader pipe array performs its ongoing rendering process. A small defect or failure in a system without any logic repair can be fatal and render the device defective. In addition, GPUs also support a concept known as texture filtering. Texture filtering is a method used to determine the texture color for a texture-mapped pixel through the use of the colors of nearby pixels of the texture, or texels; the process is also referred to as texture smoothing or texture interpolation. However, high image quality texture mapping requires a high degree of computational complexity.
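  • As an illustrative aside (not part of the original disclosure), the bilinear case can be sketched in a few lines of code. The `Texel`, `texel_at`, `lerp`, and `sample_bilinear` names below are hypothetical; the sketch only shows the weighted average of the four nearest texels that the filtering hardware described here is built to accelerate, assuming clamp-to-edge addressing and coordinates already scaled to texel space. Trilinear and anisotropic filtering blend several such bilinear results, which is why an accumulation stage appears later in the texture mapping unit.

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// Hypothetical RGBA texel in normalized floating point.
struct Texel { float r, g, b, a; };

// Clamp-to-edge fetch from a W x H texture stored row-major.
static Texel texel_at(const std::vector<Texel>& tex, int w, int h, int x, int y) {
    x = std::clamp(x, 0, w - 1);
    y = std::clamp(y, 0, h - 1);
    return tex[static_cast<size_t>(y) * w + x];
}

static Texel lerp(const Texel& a, const Texel& b, float t) {
    return { a.r + (b.r - a.r) * t, a.g + (b.g - a.g) * t,
             a.b + (b.b - a.b) * t, a.a + (b.a - a.a) * t };
}

// Bilinear filter: blend the four texels surrounding (u, v) in texel space.
Texel sample_bilinear(const std::vector<Texel>& tex, int w, int h, float u, float v) {
    const float x = u - 0.5f, y = v - 0.5f;        // move to texel centers
    const int   x0 = static_cast<int>(std::floor(x));
    const int   y0 = static_cast<int>(std::floor(y));
    const float fx = x - x0, fy = y - y0;          // fractional blend weights

    const Texel t00 = texel_at(tex, w, h, x0,     y0);
    const Texel t10 = texel_at(tex, w, h, x0 + 1, y0);
    const Texel t01 = texel_at(tex, w, h, x0,     y0 + 1);
    const Texel t11 = texel_at(tex, w, h, x0 + 1, y0 + 1);

    return lerp(lerp(t00, t10, fx), lerp(t01, t11, fx), fy);
}
```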
  • Texture filters rely on high speed access to local cache memory for pixel data. However, the use of dedicated local cache memory for texture filters typically precludes the use of more general purpose shared memory.
  • While general purpose shared memory is more flexible, it typically has slower response times and hence lower performance.
  • Given the ever increasing complexity of new software applications and advancements in API shader languages, the demands on GPUs to provide high quality rendering, texture mapping, and generalized compute capability continue to increase.
  • What are needed, therefore, are systems and/or methods to alleviate the aforementioned deficiencies. Particularly, what is needed is a multi-instance unified shader engine filter system with a scalable, parallel processing based design, the ability to access a multi-tier cache system, and the ability to overcome the effects of defective shader pipes with minimal impact on overall system performance.
  • SUMMARY OF THE INVENTION
  • This section is for the purpose of summarizing some aspects of the present invention and to briefly introduce some preferred embodiments.
  • Simplifications or omissions may be made to avoid obscuring the purpose of the section. Such simplifications or omissions are not intended to limit the scope of the present invention. Consistent with the principles of the present invention as embodied and broadly described herein, the present invention includes a method and apparatus for a multiple instance shader engine filtering system wherein each shader engine comprises multiple rows of shader engine filters combined with level one and level two cache systems. Each unified shader engine filter row comprises a shader pipe array and a texture filter, with access to a level one cache system and a level two cache. The shader pipe array accepts instructions from an executing shader program, including input, output, ALU, and texture or general memory load/store requests, with address data from register files in the shader pipes and program constants, to generate the return texel or memory data based on state data controlling the pipelined address and filtering operations for a specific pixel, vertex, primitive, surface, or general compute thread. The texture mapping unit retrieves texel data stored in a level one cache system, with the ability to read and write to and from a level two cache system, and through formatting and bilinear filtering interpolations generates a formatted bilinear result based on the specific pixel's neighboring texels. Utilizing multiple rows of shader engine filters within a shader engine allows for the parallel processing of multiple simultaneous resource requests. Utilizing multiple shader engines allows for still greater throughput through simultaneous processing across engines.
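  • As a rough structural sketch of the hierarchy summarized above (shader engines built from unified filter rows, each row pairing a shader pipe array and a texture filter with a level one cache, and all rows reaching a shared level two cache), the hypothetical C++ types below are illustrative only and do not appear in the disclosure; they simply record the multi-instance composition.

```cpp
#include <cstddef>
#include <vector>

// Hypothetical handles; the real blocks are hardware units, not software objects.
struct ShaderPipeArray    { int num_pipes; };
struct RedundantPipe      { bool in_use; };
struct TextureMappingUnit { };  // pre-formatter, interpolator, accumulator, format
struct L1Cache            { std::size_t bytes; };
struct L2Cache            { std::size_t bytes; int memory_channels; };

// One unified shader engine filter row: pipe array + texture filter + its L1.
struct UnifiedFilterRow {
    ShaderPipeArray    pipes;
    RedundantPipe      spare;  // optional redundant shader pipe (see below)
    TextureMappingUnit tmu;
    L1Cache*           l1;     // per-row, or dual-ported and shared between rows
};

// A shader engine is a stack of rows; one or more engines share the L2 blocks.
struct ShaderEngine { std::vector<UnifiedFilterRow> rows; };
struct FilterSystem {
    std::vector<ShaderEngine> engines;    // multi-instance: 1, 2, 4, ... engines
    std::vector<L2Cache>      l2_blocks;  // centralized L2, reachable from any L1
};
```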
  • In an embodiment of the invention, each unified shader engine filter row, within each shader engine, further comprises a redundant shader pipe system. The redundant shader pipe system is configured to process shader pipe data originally destined for a defective shader pipe in the shader pipe array of the same row. In this embodiment a redundant shader switch transfers shader pipe data originally destined for a defective shader pipe to the redundant shader pipe system for processing. In addition, the redundant shader switch places the processed shader pipe data at the correct column of output data at the appropriate time. An error within the shader pipe array can be static or intermittent, and could be caused, for example, by a manufacturing or post-manufacturing defect, component degradation, external interference, inadvertent static discharge, or another electrical or environmental condition or occurrence. The redundant shader pipe enables the recovery of devices with a defective sub-circuit.
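  • A minimal software analogy of this redundancy scheme, assuming one spare lane per row: data destined for a column flagged defective by the sequencer is diverted to the spare pipe, and the output side of the switch writes the spare's result back into that column's slot so downstream logic sees results in the expected position. The `RedundantShaderSwitch` class and its methods are hypothetical names, not the patent's.

```cpp
#include <cstddef>
#include <optional>
#include <vector>

using LaneData = float;  // stand-in for one shader pipe's input or output payload

// Sketch of the input and output sides of a redundant shader switch for one row.
class RedundantShaderSwitch {
public:
    // The sequencer reports which column (shader pipe), if any, is defective.
    void mark_defective(std::size_t column) { defective_ = column; }

    // Input side: divert the defective column's data to the redundant pipe.
    std::optional<LaneData> route_in(const std::vector<LaneData>& lanes) const {
        if (!defective_ || *defective_ >= lanes.size()) return std::nullopt;
        return lanes[*defective_];  // the spare pipe processes this value instead
    }

    // Output side: place the spare pipe's result back at the correct column.
    void route_out(std::vector<LaneData>& results,
                   const std::optional<LaneData>& spare_result) const {
        if (defective_ && spare_result && *defective_ < results.size())
            results[*defective_] = *spare_result;
    }

private:
    std::optional<std::size_t> defective_;  // empty when all pipes are healthy
};
```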
  • In another embodiment, each texture mapping unit within a unified shader engine filter row, in each shader engine, further comprises a pre-formatter module, an interpolator module, an accumulator module, and a format module. The pre-formatter module is configured to receive texel data and convert it to a normalized fixed point format. The interpolator module is configured to perform an interpolation on the normalized fixed point texel data from the pre-formatter module and generate re-normalized floating point texel data. The accumulator module is configured to accumulate floating point texel data from the interpolator module to achieve the desired level of bilinear, trilinear, and anisotropic filtering. The format module is configured to convert texel data from the accumulator module into a standard floating point representation.
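  • The four-stage chain lends itself to a small functional sketch. The 16-bit fixed point encoding, the scale factor, and the function names below are assumptions made for illustration; the disclosure does not specify bit widths or an API.

```cpp
#include <cstdint>

// Assumed 16-bit normalized fixed point encoding (1.0f maps to 32767).
using Fixed = uint16_t;

// Pre-formatter: convert incoming texel data to normalized fixed point.
Fixed pre_format(float texel) {
    if (texel < 0.0f) texel = 0.0f;
    if (texel > 1.0f) texel = 1.0f;
    return static_cast<Fixed>(texel * 32767.0f + 0.5f);
}

// Interpolator: blend two fixed point texels, producing re-normalized float data.
float interpolate(Fixed a, Fixed b, float t) {
    const float fa = a / 32767.0f, fb = b / 32767.0f;
    return fa + (fb - fa) * t;
}

// Accumulator: sum weighted interpolation results (e.g. the taps of a trilinear
// or anisotropic kernel built from several bilinear lookups).
struct Accumulator {
    float sum = 0.0f, weight = 0.0f;
    void add(float sample, float w) { sum += sample * w; weight += w; }
};

// Format: emit the accumulated value as a standard floating point result.
float format(const Accumulator& acc) {
    return acc.weight > 0.0f ? acc.sum / acc.weight : 0.0f;
}
```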
  • In another embodiment of the invention, a level one cache system is configured with dual access so that two shader engine filters have access to a single level one cache system.
  • In another embodiment, more than one level two cache system is configured to be accessible by other resources.
  • In another embodiment, the communication between a level one cache system and a level two cache system utilizes more than one memory channel, thereby resulting in greater data throughput.
  • In another embodiment, one or more level one cache systems can allocate defined areas of memory to be shared amongst other resources, including other level one cache systems. In certain instances this approach allows for quicker fetch times of texel data where the required data has already been moved from a level two cache system to a level one cache system.
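  • One reading of the cache embodiments above, sketched under assumed software semantics (address-keyed maps standing in for cache lines; none of the names are from the disclosure): a level one cache that misses can first probe the shared regions of peer level one caches, since another row may already have pulled the line in, before paying the cost of a level two access.

```cpp
#include <cstdint>
#include <optional>
#include <unordered_map>
#include <vector>

using Address   = uint64_t;
using CacheLine = std::vector<uint8_t>;

struct L1CacheSketch {
    std::unordered_map<Address, CacheLine> lines;          // private contents
    std::unordered_map<Address, CacheLine> shared_region;  // area visible to peer L1s
};

struct L2CacheSketch {
    std::unordered_map<Address, CacheLine> lines;
};

// Fetch order: own L1, then peers' shared regions, then the L2 cache system.
std::optional<CacheLine> fetch_texels(Address addr, L1CacheSketch& own,
                                      const std::vector<L1CacheSketch*>& peers,
                                      const L2CacheSketch& l2) {
    if (auto it = own.lines.find(addr); it != own.lines.end())
        return it->second;                                  // L1 hit
    for (const L1CacheSketch* peer : peers)
        if (auto it = peer->shared_region.find(addr); it != peer->shared_region.end()) {
            own.lines[addr] = it->second;                   // quicker than an L2 round trip
            return it->second;
        }
    if (auto it = l2.lines.find(addr); it != l2.lines.end()) {
        own.lines[addr] = it->second;                       // normal L2 fill
        return it->second;
    }
    return std::nullopt;                                    // would go beyond L2 to memory
}
```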
  • Further features and advantages of the invention, as well as the structure and operation of various embodiments of the present invention, are described in detail below with reference to the accompanying drawings. It is noted that the invention is not limited to the specific embodiments described herein. Such embodiments are presented herein for illustrative purposes only. Additional embodiments will be apparent to persons skilled in the relevant art(s) based on the teachings contained herein.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • The accompanying drawings, which are incorporated in and constitute part of the specification, illustrate embodiments of the invention and, together with the general description given above and the detailed description of the embodiment given below, serve to explain the principles of the present invention. In the drawings:
  • FIG. 1 is a system diagram depicting an implementation of single shader engine filtering system with a single unified shader engine filter row.
  • FIG. 2 is a system diagram depicting an implementation of a single unified shader engine filtering system with multiple rows of shader engine filters, and a level one and level two cache system.
  • FIG. 3 is a system diagram depicting an implementation of a single unified shader engine filtering system with dual shader engines, a dual ported level one cache system, and a level two cache system.
  • FIG. 4 is a system diagram depicting an implementation of a quad unified shader engine filtering system with quad shader engines, dual ported level one cache and a level two cache system.
  • FIG. 5 is a flowchart depicting an implementation of a method for a scalable shader engine filtering system.
  • Features and advantages of the present invention will become more apparent from the detailed description set forth below when taken in conjunction with the drawings, in which like reference characters identify corresponding elements throughout. In the drawings, like reference numbers generally indicate identical, functionally similar, and/or structurally similar elements. The drawing in which an element first appears is indicated by the leftmost digit(s) in the corresponding reference number.
  • DETAILED DESCRIPTION
  • The invention will be better understood from the following descriptions of various “embodiments” of the invention. Thus, specific “embodiments” are views of the invention, but each is not the whole invention. The present invention relates to a multiple instance shader engine filtering system wherein each shader engine comprises multiple rows of shader engine filters combined with level one and level two cache systems. Each unified shader engine filter row accepts texture requests for a specified pixel from a resource and performs rendering and texture calculations, outputting texel data. In embodiments of this invention, bilinear texture filtering, trilinear texture filtering, and anisotropic texture mapping are applied to texel data stored in a multi level cache system. In another embodiment, a redundant shader system can be added to and configured for each unified shader engine filter row to effectively repair defective shader pipes within the shader pipe array of the same row. Additionally, a unified shader module can be reserved as a redundant subsystem, and data destined for a defective unified shader module can be sent to the redundant unified shader module. This significantly increases the portion of the device that is covered by repair, due to the inclusion of the texture mapping unit and L1 cache system, and thus significantly improves the yield of such a device.
  • While specific configurations, arrangements, and steps are discussed, it should be understood that this is done for illustrative purposes only. A person skilled in the pertinent art(s) will recognize that other configurations, arrangements, and steps can be used without departing from the spirit and scope of the present invention. It will be apparent to a person skilled in the pertinent art(s) that this invention can also be employed in a variety of other applications.
  • References in the specification to “one embodiment,” “an embodiment,” “an example embodiment,” etc., indicate that the embodiment described can include a particular feature, structure, or characteristic, but every embodiment may not necessarily include the particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment. Further, when a particular feature, structure, or characteristic is described in connection with an embodiment, it would be within the knowledge of one skilled in the art to incorporate such a feature, structure, or characteristic in connection with other embodiments whether or not explicitly described.
  • While the present invention is described herein with reference to illustrative embodiments for particular applications, it should be understood that the invention is not limited thereto. Those skilled in the art with access to the teachings provided herein will recognize additional modifications, applications, and embodiments within the scope thereof and additional fields in which the invention would be of significant utility.
  • FIG. 1 is an illustration of a single shader engine filtering system 100 according to an embodiment of the present invention. System 100 comprises a single unified shader engine filter row 110, sequencer 130, level one cache system 120, and level two cache system 140. Unified shader engine filter row 110 comprises a shader pipe array 112, redundant shader pipe array 114, texture mapping unit 116, and address generator module 118.
  • Shader pipe array 112 performs rendering calculation on input data. Sequencer 130 controls the flow of data through shader pipe array 112. In addition, in an embodiment where the redundant shader pipe array 114 is present, sequencer 130 identifies defective shader pipes that occur within shader pipe array 112.
  • However, in the event that sequencer 130 detects a defective shader pipe in shader pipe array 112, and the redundant shader pipe array 114 is present, the shader pipe data originally destined for the defective shader pipe is transferred, via a direct horizontal fetch path, from shader pipe array 112 to the redundant shader pipe array 114. Redundant shader pipe array 114 is responsible for processing the shader pipe data originally destined for the defective shader pipe. Once the shader pipe data is processed, it is returned to the output stream of shader pipe array 112.
  • Shader pipe array 112 can also issue a texture request to texture mapping unit 116. In this instance texture mapping unit 116 generates appropriate addresses to the level one cache system 120 that contains texel data associated with pixels. The level one cache system 120, after receiving an address, will return the associated texel data to texture mapping unit 116. However, in the event that level one cache system 120 does not have the required texel data, a request is sent to the level two cache system 140 in order to retrieve the required texel data.
  • Upon receipt of texel data, texture mapping unit 116, through formatting and bilinear filtering interpolations, generates a formatted bilinear result based on the specific pixel's neighboring texels. Texture mapping unit 116 comprises pre-formatter module 116-A, interpolator module 116-B, accumulator module 116-C, and format module 116-D. Texture mapping unit 116 receives requests from shader pipe array 112, redundant shader pipe array 114, and sequencer 130, and processes each instruction in address generator 118 to determine the actual addresses to service. The resulting texel data is received from the level one cache system 120 and, after being processed in pre-formatter module 116-A, interpolator module 116-B, accumulator module 116-C, and format module 116-D, is sent back to the requesting resource in shader pipe array 112 and/or redundant shader pipe array 114. Pre-formatter module 116-A is configured to receive the texel data and convert it to a normalized fixed point format. Interpolator module 116-B receives the normalized fixed point texel data from pre-formatter module 116-A and performs one or more interpolations, each of which is accumulated in accumulator module 116-C, to achieve the desired level of bilinear, trilinear, and anisotropic texture filtering. Format module 116-D converts the accumulated texel data in accumulator module 116-C to a standard floating point representation for the requesting resource, shader pipe array 112.
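  • As a sketch of the address generation step only (the real address generator 118 also handles addressing modes, mip levels, tiling, and data formats, none of which are modeled, and every name below is hypothetical), one normalized sample coordinate expands into the four texel addresses of a 2x2 bilinear footprint that would be requested from the level one cache:

```cpp
#include <array>
#include <cmath>
#include <cstdint>

struct TexelAddress { uint32_t x, y; };

// Expand one normalized (u, v) coordinate into the 2x2 footprint of texel
// addresses that the texture mapping unit would request from the L1 cache,
// assuming clamp-to-edge behavior.
std::array<TexelAddress, 4> bilinear_footprint(float u, float v,
                                               uint32_t width, uint32_t height) {
    const float x  = u * width  - 0.5f;
    const float y  = v * height - 0.5f;
    const int   x0 = static_cast<int>(std::floor(x));
    const int   y0 = static_cast<int>(std::floor(y));
    const auto clamp_to = [](int c, uint32_t hi) -> uint32_t {
        if (c < 0) return 0u;
        if (c >= static_cast<int>(hi)) return hi - 1;
        return static_cast<uint32_t>(c);
    };
    return { TexelAddress{clamp_to(x0,     width), clamp_to(y0,     height)},
             TexelAddress{clamp_to(x0 + 1, width), clamp_to(y0,     height)},
             TexelAddress{clamp_to(x0,     width), clamp_to(y0 + 1, height)},
             TexelAddress{clamp_to(x0 + 1, width), clamp_to(y0 + 1, height)} };
}
```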
  • FIG. 2 is an illustration of a single shader engine filtering system 200 with multiple rows of shader engine filters, distributed level one cache systems, and a centralized level two cache system according to an embodiment of the present invention. In this embodiment, shader engine filtering system 200 comprises a single shader engine 230, a redundant shader switch shown as redundant shader switch input (RSS-In) 210 and a redundant shader switch output (RSS-Out) 220, a sequencer 130, and a level two cache system represented as 140-1 through 140-M, where M is a positive integer.
  • Shader engine 230 comprises one or more shader filter rows represented as 110-1 through 110-N, and associated level one cache systems represented as 120-1 through 120-N, where N is a positive integer that is not necessarily equal to M. In this embodiment, while each unified shader engine filter has an associated level one cache in a distributed design, the level two cache system is designed as a centralized system given that the level two cache system 140 is not specifically associated with any particular unified shader engine filter row, but rather, can be accessed by any level one cache system 120-1 through 120-N via wide channel memory bus 240. While the embodiment shown in FIG. 2 illustrates the level two cache system as comprising more than one block, illustrated by 140-1 through 140-M, the level two cache system could comprise a single system.
  • RSS-In 210 controls the flow of input data to shader pipe array 1, 112-1. Sequencer 130 controls the flow of data through the shader pipe arrays 112-1 to 112-N, as well as identifying defective shader pipes that occur within shader pipe array 1 through shader pipe array N, 112-1 through 112-N. In the event that there are no defective shader pipes, the processed data continues through RSS-Out 220.
  • However, in the event that sequencer 130 detects a defective shader pipe in a respective shader pipe array 1 through N, 112-1 through 112-N, and the unified shader engine filtering system is thereby notified of a defect state, sequencer 130 replaces the defective shader pipe by notifying RSS-In 210 of the location of the defective shader pipe. RSS-In 210 transfers the shader pipe data that was destined for the defective shader pipe, via a direct horizontal data path, from the shader pipe array to the associated redundant shader pipe array.
  • As an example, if sequencer 130 detects a defective shader pipe in shader pipe array 2, 112-2, then RSS-In 210 will transfer the shader pipe data that was destined for the defective shader pipe from shader pipe array 2, 112-2, to redundant shader pipe system 2, 114-2. Redundant shader pipe array 2, 114-2, is responsible for processing the shader pipe data. Once the shader pipe data is processed, it is returned to RSS-Out 220, which places the processed shader pipe data at the correct location and at the proper time, just as it would have been had the shader pipe not been defective.
  • FIG. 3 is an illustration of a dual shader engine filtering system 300 with multiple rows of shader engine filters in each shader engine, a dual ported level one cache system, and a centralized level two cache system according to an embodiment of the present invention. In this embodiment, dual shader engine filtering system 300 comprises two single shader engines, shader engine 230-1 and shader engine 230-2, a dual ported level one cache system 310, and distributed level two cache system 140 connected to level one cache system 310 via wide channel memory bus 240.
  • Each shader engine comprises multiple rows of shader engine filters, as more fully described for FIG. 2, but these rows are illustrated in FIG. 3 within a single block. Shader pipe arrays 1 through N for shader engine 230-1 are illustrated by block 312-1, labeled shader pipe arrays 1L-NL, where N is a positive integer and “L” indicates the “left” shader engine. In a similar manner, redundant shader pipe systems 1L-NL, 314-1, and texture mapping units 1L-NL, 316-1, are each represented as an array of N elements. Functionally, however, shader engine 230-1 operates as described for the shader engine of FIG. 2. The second shader engine 230-2, on the right of FIG. 3, is similarly configured.
  • In the embodiment described in FIG. 3, each shader engine is configured with its own redundant shader switch and sequencer. However, in another embodiment a single sequencer and redundant shader switch could be configured to accomplish essentially equivalent results.
  • In the embodiment described in FIG. 3, a dual-ported level one cache system 310 is configured. However, in another embodiment a dedicated level one cache system could be configured for each shader engine.
  • FIG. 4 illustrates the scalability of the design and is an illustration of a quad shader engine filtering system 400 with multiple rows of shader engine filters in each shader engine, two dual ported level one cache systems and two centralized level two cache systems according to an embodiment of the present invention. In this embodiment, quad shader engine filtering system 400 comprises four single shader engines, shader engine 230-1, shader engine 230-2, shader engine 230-3, and shader engine 230-4.
  • Each shader engine comprises shader pipe arrays (312-1, 312-2, 312-3, 312-4), redundant shader pipe systems (314-1, 314-2, 314-3, 314-4), and texture filters (316-1, 316-2, 316-3, 316-4). The upper shader engines, 230-1 and 230-2, are configured to support dual ported level one cache system 310-1, while the lower shader engines, 230-3 and 230-4, are configured to support dual ported level one cache system 310-2. In addition, in the embodiment shown in FIG. 4, the upper level one cache system 310-1 has direct access to level two cache systems 410-1 through 410-M, and the lower level one cache system 310-2 has direct access to level two cache systems 412-1 through 412-M. In addition, the two level one cache systems, 310-1 and 310-2, are also interconnected.
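  • The connectivity just described can be recorded in a small, assumed wiring table (labels follow the figure's reference numerals; the types are illustrative, not part of the disclosure): each dual ported level one cache serves two shader engines, fronts its own group of level two banks, and is interconnected with the other level one cache.

```cpp
#include <string>
#include <vector>

// Simplified connectivity model of the FIG. 4 quad-engine configuration.
struct DualPortedL1 {
    std::string              label;            // e.g. "310-1"
    std::vector<std::string> serves_engines;   // the two shader engines on its ports
    std::vector<std::string> l2_banks;         // first and last L2 bank labels it reaches
    std::string              interconnect_to;  // the peer dual ported L1
};

const std::vector<DualPortedL1> quad_topology = {
    { "310-1", { "230-1", "230-2" }, { "410-1", "410-M" }, "310-2" },  // upper half
    { "310-2", { "230-3", "230-4" }, { "412-1", "412-M" }, "310-1" },  // lower half
};
```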
  • FIG. 4 illustrates the functionality of the redundant shader switch in a single block, here shown as redundant shader switch In/Out 420-1 supporting shader engine 230-1, redundant shader switch In/Out 420-2 supporting shader engine 230-2, redundant shader switch In/Out 422-1 supporting shader engine 230-3, and redundant shader switch In/Out 422-2 supporting shader engine 230-4. The operation and functionality of the redundant shader switch illustrated in FIG. 4 is essentially equivalent to that described in FIG. 2 and FIG. 3. In addition, the functionality of each shader engine is as described in the previous figures.
  • The embodiment shown in FIG. 4 illustrates sequencer 130-1 supporting both shader engines 230-1 and 230-3 and sequencer 130-2 supporting both shader engines 230-2 and 230-4. In other embodiments a single sequencer could support all of the shader engines. In yet another embodiment, each shader engine could be supported by its own sequencer. In a similar manner, the embodiment shown in FIG. 4 illustrating the redundant shader switch could also be accomplished with a single redundant shader switch, or with dedicated shader switches for each shader engine.
  • FIG. 5 is a flowchart depicting a method 500 for scalable shader engine filtering. Method 500 begins at step 502. In step 504, multiple texture fetch instructions can be received by multiple unified shader engines in parallel. In step 506, the texture requests are allocated amongst the shader engines. Allocation of the texture requests can be done using any of a plurality of methods, including, for example, load balancing and/or prioritization. In step 508, each shader engine generates a source set of addresses based on the shader program instructions for a specified set of pixels, vertices, primitives, surfaces, or compute work items. In step 510, each shader engine, in parallel, can retrieve texel data from a level one cache system. In step 512, each unified shader engine calculates a formatted interpolation for each set of texel data using a texture filter. Method 500 concludes after step 512.
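  • A minimal host-side sketch of the method's control flow, assuming placeholder engine and texel types and a round-robin stand-in for the allocation policy; none of the names or signatures below come from the disclosure, and the real steps execute on parallel GPU hardware rather than in a sequential loop.

```cpp
#include <cstddef>
#include <vector>

struct TextureRequest { float u, v; int texture_id; };   // one texture fetch instruction
struct TexelSet       { std::vector<float> texels; };    // data retrieved from L1
struct FilteredResult { float r, g, b, a; };              // formatted interpolation

struct ShaderEngineStub {
    std::vector<std::size_t> generate_addresses(const TextureRequest&) { return {}; }
    TexelSet       retrieve_from_l1(const std::vector<std::size_t>&)   { return {}; }
    FilteredResult filter(const TexelSet&)                             { return {}; }
};

// Steps 504-512 of method 500, modeled sequentially for clarity.
std::vector<FilteredResult> method_500(const std::vector<TextureRequest>& requests,
                                       std::vector<ShaderEngineStub>& engines) {
    std::vector<FilteredResult> out(requests.size());
    if (engines.empty()) return out;
    for (std::size_t i = 0; i < requests.size(); ++i) {
        // 504: the requests vector stands in for fetch instructions received in parallel.
        // 506: allocate the request to a shader engine (round-robin here; the
        //      disclosure mentions load balancing and/or prioritization).
        ShaderEngineStub& eng = engines[i % engines.size()];
        // 508: generate the source set of addresses for the request.
        const auto addrs = eng.generate_addresses(requests[i]);
        // 510: retrieve texel data from the level one cache system.
        const TexelSet texels = eng.retrieve_from_l1(addrs);
        // 512: calculate a formatted interpolation using the texture filter.
        out[i] = eng.filter(texels);
    }
    return out;
}
```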
  • The functions, processes, systems, and methods outlined in FIGS. 1, 2, 3, 4, and 5 can be implemented in software, firmware, or hardware, or using any combination thereof. If programmable logic is used, such logic can execute on a commercially available processing platform or a special purpose device.
  • As would be apparent to one skilled in the relevant art, based on the description herein, embodiments of the present invention can be designed in software using a hardware description language (HDL) such as, for example, Verilog or VHDL. The HDL-design can model the behavior of an electronic system, where the design can be synthesized and ultimately fabricated into a hardware device. In addition, the HDL-design can be stored in a computer product and loaded into a computer system prior to hardware manufacture.
  • It is to be appreciated that the Detailed Description section, and not the Summary and Abstract sections, is intended to be used to interpret the claims. The Summary and Abstract sections can set forth one or more but not all exemplary embodiments of the present invention as contemplated by the inventor(s), and thus, are not intended to limit the present invention and the appended claims in any way.
  • The present invention has been described above with the aid of functional building blocks illustrating the implementation of specified functions and relationships thereof. The boundaries of these functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternate boundaries can be defined so long as the specified functions and relationships thereof are appropriately performed.
  • The foregoing description of the specific embodiments will so fully reveal the general nature of the invention that others can, by applying knowledge within the skill of the art, readily modify and/or adapt for various applications such specific embodiments, without undue experimentation, without departing from the general concept of the present invention. Therefore, such adaptations and modifications are intended to be within the meaning and range of equivalents of the disclosed embodiments, based on the teaching and guidance presented herein. It is to be understood that the phraseology or terminology herein is for the purpose of description and not of limitation, such that the terminology or phraseology of the present specification is to be interpreted by the skilled artisan in light of the teachings and guidance.
  • While various embodiments of the present invention have been described above, it should be understood that they have been presented by way of example only, and not limitation. It will be apparent to persons skilled in the relevant art that various changes in form and detail can be made therein without departing from the spirit and scope of the invention. Thus, the breadth and scope of the present invention should not be limited by any of the above-described exemplary embodiments, but should be defined only in accordance with the following claims and their equivalents.

Claims (28)

1. A scalable shader engine filtering system, comprising:
a shader engine;
a level one cache system;
a level two cache system; and
a sequencer,
wherein the shader engine is configured to accept texture requests, under the control of the sequencer, from one or more resources and generate a formatted interpolation based on texel data stored in the level one cache system, the level one cache system being configured to communicate with the level two cache system via a wide channel memory bus.
2. The scalable shader engine filtering system of claim 1, further comprising one or more shader engines.
3. The scalable shader engine filtering system of claim 2, wherein the shader engine comprises:
one or more engine rows, each engine row comprising:
a shader pipe array configured to accept a texture request from a resource whereby output texel data is generated based on rendering calculations for a specified pixel; and
a texture mapping unit configured to accept the texel data and generate a formatted interpolation.
4. The scalable shader engine filtering system of claim 1, wherein the level one cache system is configurable to be shared amongst other resources.
5. The scalable shader engine filtering system of claim 3, wherein each engine row further comprises a redundant shader pipe system configured to process shader pipe data destined to a defective shader pipe.
6. The scalable shader engine filtering system of claim 5, further comprising a redundant shader switch configured to control the switching of data input and output signals to the redundant shader pipe system when a defective shader pipe within the shader pipe array is identified by the sequencer.
7. The scalable shader engine filtering system of claim 2, wherein a shader pipe block is configured to contain one or more shader pipes.
8. The scalable shader engine filtering system of claim 1, wherein the level one cache system comprises one or more level one cache blocks where each level one cache block is associated with a shader pipe texture mapping unit whereby the shader pipe texture mapping unit is configured to read and write data to and from the level one cache block.
9. The scalable shader engine filtering system of claim 3, wherein the texture mapping unit comprises:
a pre-formatter module configured to accept texel data and generate normalized fixed point texel data;
an interpolator module configured to perform an interpolation on the normalized fixed point texel data from the pre-formatter module and generate re-normalized floating point texel data;
an accumulator module configured to accumulate floating point texel data from the interpolator module; and
a format module configured to convert texel data from the accumulator module into a standard floating point representation.
10. The scalable shader engine filtering system of claim 9, wherein the interpolator module is configured to perform one or more interpolations in order to achieve at least one of:
a bilinear texture filtering;
a trilinear texture filtering; and
an anisotropic texture filtering.
11. A method for scalable shader engine filtering, comprising:
receiving texture requests from a plurality of resources in parallel for a plurality of specified pixels;
allocating the texture requests amongst a plurality of shader engines;
generating a set of output texel data for each specified pixel;
retrieving texel data; and
calculating a formatted interpolation for each set of texel data in parallel.
12. The scalable shader engine filtering method of claim 11, further comprising:
processing shader pipe data destined to one or more defective shader pipes.
13. The scalable shader engine filtering method of claim 11, further comprising:
utilizing multi-level caching algorithms to increase performance.
14. The scalable shader engine filtering method of claim 11, further comprising:
receiving floating point texel data;
generating normalized fixed point texel data from the floating point texel data;
performing an interpolation on the normalized fixed point texel data;
generating re-normalized floating point texel data;
accumulating re-normalized texel data; and
formatting the accumulated re-normalized texel data into a standard floating point representation.
15. The scalable shader engine filtering method of claim 14, wherein interpolation calculating comprises:
filtering using a bilinear texture filter;
filtering using a trilinear texture filter; and
filtering using an anisotropic texture filter.
16. The scalable shader engine filtering method of claim 11, wherein the method is performed by synthesizing hardware description language instructions.
17. A system for scalable shader engine filtering, comprising:
a processor; and
a memory in communication with said processor, said memory for storing a plurality of processing instructions for directing said processor to:
receive texture requests from a plurality of resources for a plurality of specified pixels;
allocate the texture requests amongst a plurality of shader engines;
generate a set of output texel data for each specified pixel based on rendering calculations;
retrieve texel data from a level one cache system; and
calculate a formatted interpolation for each set of texel data in parallel utilizing a texture filter.
18. A system according to claim 17 further comprising instructions for causing said processor to:
process shader pipe data destined to one or more defective shader pipes.
19. A system according to claim 17 further comprising instructions for causing said processor to:
read and write to a level two cache system from the level one cache system.
20. A system according to claim 17 further comprising instructions for causing said processor to:
receive floating point texel data;
generate normalized fixed point texel data from the floating point texel data;
perform an interpolation on the normalized fixed point texel data;
generate re-normalized floating point texel data;
accumulate re-normalized texel data; and
format the accumulated re-normalized texel data into a standard floating point representation.
21. A system according to claim 17 further comprising instructions for causing said processor to:
perform a bilinear texture filter;
perform a trilinear texture filter; and
perform an anisotropic texture filter.
22. A system for scalable shader engine filtering, comprising:
means for receiving texture requests from a plurality of resources for a plurality of specified pixels;
means for allocating the texture requests amongst a plurality of shader engines;
means for generating a set of output texel data for each specified pixel based on rendering calculations;
means for retrieving texel data from a level one cache system; and
means for calculating a formatted interpolation for each set of texel data in parallel utilizing a texture filter.
23. A system according to claim 22, further comprising:
means for processing shader pipe data destined to one or more defective shader pipes.
24. A system according to claim 23, further comprising:
means for reading and writing to and from a level two cache system from the level one cache system.
25. A system according to claim 23, further comprising:
means for receiving floating point texel data;
means for generating normalized fixed point texel data from the floating point texel data;
means for performing an interpolation on the normalized fixed point texel data;
means for generating re-normalized floating point texel data;
means for accumulating re-normalized texel data; and
means for formatting the accumulated re-normalized texel data into a standard floating point representation.
26. A system according to claim 23, further comprising:
means for performing a bilinear texture filter;
means for performing a trilinear texture filter; and
means for performing an anisotropic texture filter.
27. A computer readable medium carrying one or more sequences of one or more instructions for execution by one or more processor-based computing systems which, when executed, cause the computing systems to perform a method for scalable shader complex filtering, comprising:
receiving texture requests from a plurality of resources for a plurality of specified pixels;
allocating the texture requests amongst a plurality of shader engines;
generating a set of output texel data for each specified pixel based on rendering calculations;
retrieving texel data from a level one cache system; and
calculating a formatted interpolation for each set of texel data in parallel utilizing a texture filter.
28. The computer readable medium according to claim 27, wherein the method for scalable shader complex filtering further comprises:
receiving floating point texel data;
generating normalized fixed point texel data from the floating point texel data;
performing an interpolation on the normalized fixed point texel data;
generating re-normalized floating point texel data;
accumulating re-normalized texel data; and
formatting the accumulated re-normalized texel data into a standard floating point representation.
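
To make the claimed request flow easier to follow, the sketch below models in ordinary C++ the path recited in claims 1, 11 and 17-19: texture requests arriving from several resources are spread across multiple shader engines, and each engine reads texel data through a level one cache that falls back to a shared level two cache on a miss. This is a minimal software illustration only, not the claimed hardware; the container choices, the round-robin allocation policy, and the fetchFromMemory() helper are assumptions introduced for the example.

```cpp
#include <cstddef>
#include <cstdint>
#include <unordered_map>
#include <vector>

using TexelAddr = std::uint64_t;
using Texel     = float;

// Stand-in for a graphics-memory access (assumption for illustration).
Texel fetchFromMemory(TexelAddr addr) {
    return static_cast<Texel>(addr & 0xFF) / 255.0f;
}

struct L2Cache {
    std::unordered_map<TexelAddr, Texel> lines;
    Texel read(TexelAddr addr) {
        auto it = lines.find(addr);
        if (it != lines.end()) return it->second;
        Texel t = fetchFromMemory(addr);     // L2 miss: fetch from memory
        lines.emplace(addr, t);
        return t;
    }
};

struct L1Cache {
    std::unordered_map<TexelAddr, Texel> lines;
    L2Cache* l2 = nullptr;
    Texel read(TexelAddr addr) {
        auto it = lines.find(addr);
        if (it != lines.end()) return it->second;
        Texel t = l2->read(addr);            // L1 miss: ask the shared L2
        lines.emplace(addr, t);
        return t;
    }
};

struct ShaderEngine {
    L1Cache l1;
    Texel service(TexelAddr addr) { return l1.read(addr); }
};

// Round-robin allocation of incoming texture requests across shader engines.
std::vector<Texel> dispatch(const std::vector<TexelAddr>& requests,
                            std::vector<ShaderEngine>& engines) {
    std::vector<Texel> results;
    results.reserve(requests.size());
    for (std::size_t i = 0; i < requests.size(); ++i)
        results.push_back(engines[i % engines.size()].service(requests[i]));
    return results;
}

int main() {
    L2Cache sharedL2;
    std::vector<ShaderEngine> engines(4);
    for (auto& e : engines) e.l1.l2 = &sharedL2;   // all L1s share one L2
    std::vector<TexelAddr> requests = {0x10, 0x11, 0x10, 0x20};
    auto texels = dispatch(requests, engines);
    return texels.empty() ? 1 : 0;
}
```

In the claimed system the level one/level two split keeps per-engine traffic local while misses travel over the wide channel to the level two cache; the sketch mirrors that only at the level of lookup order, not bus width or timing.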
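
Claims 9, 10 and 14 recite a four-stage filtering datapath: a pre-formatter that produces normalized fixed-point texels, an interpolator that blends them and re-normalizes to floating point, an accumulator, and a format stage that emits a standard floating-point result. The C++ sketch below walks the same four stages for the bilinear case only; the 16-bit fixed-point width, the function names, and the weight handling are assumptions made for illustration, not details taken from the specification.

```cpp
#include <array>
#include <cmath>
#include <cstdint>

using Fixed = std::int32_t;                 // normalized fixed-point texel
constexpr int kFracBits = 16;               // assumed fractional precision

// Pre-formatter stage: floating-point texel -> normalized fixed point.
Fixed preFormat(float texel) {
    return static_cast<Fixed>(std::lround(texel * (1 << kFracBits)));
}

// Interpolator stage: bilinear blend of a 2x2 footprint, re-normalized to
// floating point (the bilinear case of claim 10).
float bilinear(const std::array<Fixed, 4>& t, float fx, float fy) {
    float top     = static_cast<float>(t[0]) + fx * (t[1] - t[0]);
    float bottom  = static_cast<float>(t[2]) + fx * (t[3] - t[2]);
    float blended = top + fy * (bottom - top);
    return blended / static_cast<float>(1 << kFracBits);
}

// Accumulator + format stages: sum weighted samples (e.g. the two mip taps
// of a trilinear filter) and emit a standard IEEE-754 float.
struct Accumulator {
    float sum = 0.0f;
    void accumulate(float sample, float weight) { sum += sample * weight; }
    float format() const { return sum; }
};
```

Under the same assumptions, a trilinear filter would run the bilinear stage once per mip level and feed both results through the accumulator with the mip fraction as weights, and the anisotropic case would accumulate additional taps along the axis of anisotropy.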
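
Claims 5, 6 and 12 add a repair path: work addressed to a shader pipe that the sequencer has marked defective is switched to a redundant shader pipe instead. The routing decision itself is simple, as the hedged sketch below suggests; the fixed pipe count and the boolean defect map are assumptions, and a real implementation would steer wide data buses rather than return an index.

```cpp
#include <array>
#include <cstddef>

constexpr std::size_t kPipesPerArray = 16;  // assumed shader pipes per array

struct RedundantShaderSwitch {
    // Defect map, assumed to be written by the sequencer at configuration time.
    std::array<bool, kPipesPerArray> defective{};

    // Index of the spare column used when a pipe is marked defective.
    static constexpr std::size_t kRedundantPipe = kPipesPerArray;

    // Returns the pipe that should actually execute work addressed to
    // logicalPipe: healthy pipes map to themselves, defective pipes are
    // remapped to the redundant pipe.
    std::size_t route(std::size_t logicalPipe) const {
        return defective[logicalPipe] ? kRedundantPipe : logicalPipe;
    }
};
```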
US12/476,202 2008-05-30 2009-06-01 Multi Instance Unified Shader Engine Filtering System With Level One and Level Two Cache Abandoned US20090309896A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US12/476,202 US20090309896A1 (en) 2008-05-30 2009-06-01 Multi Instance Unified Shader Engine Filtering System With Level One and Level Two Cache

Applications Claiming Priority (6)

Application Number Priority Date Filing Date Title
US5751308P 2008-05-30 2008-05-30
US5750408P 2008-05-30 2008-05-30
US5749908P 2008-05-30 2008-05-30
US5748308P 2008-05-30 2008-05-30
US5749208P 2008-05-30 2008-05-30
US12/476,202 US20090309896A1 (en) 2008-05-30 2009-06-01 Multi Instance Unified Shader Engine Filtering System With Level One and Level Two Cache

Publications (1)

Publication Number Publication Date
US20090309896A1 2009-12-17

Family

ID=41379236

Family Applications (9)

Application Number Title Priority Date Filing Date
US12/476,159 Active 2030-06-28 US8195882B2 (en) 2008-05-30 2009-06-01 Shader complex with distributed level one cache system and centralized level two cache
US12/476,161 Active 2031-09-30 US8558836B2 (en) 2008-05-30 2009-06-01 Scalable and unified compute system
US12/476,158 Active 2031-06-27 US9093040B2 (en) 2008-05-30 2009-06-01 Redundancy method and apparatus for shader column repair
US12/476,202 Abandoned US20090309896A1 (en) 2008-05-30 2009-06-01 Multi Instance Unified Shader Engine Filtering System With Level One and Level Two Cache
US12/476,152 Abandoned US20090315909A1 (en) 2008-05-30 2009-06-01 Unified Shader Engine Filtering System
US14/808,113 Active US9367891B2 (en) 2008-05-30 2015-07-24 Redundancy method and apparatus for shader column repair
US15/156,658 Active US10861122B2 (en) 2008-05-30 2016-05-17 Redundancy method and apparatus for shader column repair
US17/113,827 Active US11386520B2 (en) 2008-05-30 2020-12-07 Redundancy method and apparatus for shader column repair
US17/862,096 Active US11948223B2 (en) 2008-05-30 2022-07-11 Redundancy method and apparatus for shader column repair

Country Status (1)

Country Link
US (9) US8195882B2 (en)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2012147364A1 (en) * 2011-04-28 2012-11-01 Digital Media Professionals Inc. Heterogeneous graphics processor and configuration method thereof
US9508185B2 (en) 2011-05-02 2016-11-29 Sony Interactive Entertainment Inc. Texturing in graphics hardware
US9569880B2 (en) * 2013-12-24 2017-02-14 Intel Corporation Adaptive anisotropic filtering
CN103955407B (en) * 2014-04-24 2018-09-25 深圳中微电科技有限公司 Reduce the method and device of texture delay in the processor
KR20160071769A (en) 2014-12-12 2016-06-22 삼성전자주식회사 Semiconductor memory device and memory system including the same
US10445852B2 (en) 2016-12-22 2019-10-15 Apple Inc. Local image blocks for graphics processing
US10324844B2 (en) 2016-12-22 2019-06-18 Apple Inc. Memory consistency in graphics memory hierarchy with relaxed ordering
US10504270B2 (en) 2016-12-22 2019-12-10 Apple Inc. Resource synchronization for graphics processing
US10223822B2 (en) 2016-12-22 2019-03-05 Apple Inc. Mid-render compute for graphics processing

Family Cites Families (26)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5218680A (en) 1990-03-15 1993-06-08 International Business Machines Corporation Data link controller with autonomous in tandem pipeline circuit elements relative to network channels for transferring multitasking data in cyclically recurrent time slots
AU700629B2 (en) * 1994-03-22 1999-01-07 Hyperchip Inc. Efficient direct cell replacement fault tolerant architecture supporting completely integrated systems with means for direct communication with system operator
US5793371A (en) * 1995-08-04 1998-08-11 Sun Microsystems, Inc. Method and apparatus for geometric compression of three-dimensional graphics data
US5864342A (en) * 1995-08-04 1999-01-26 Microsoft Corporation Method and system for rendering graphical objects to image chunks
JP3645024B2 (en) 1996-02-06 2005-05-11 株式会社ソニー・コンピュータエンタテインメント Drawing apparatus and drawing method
US6021511A (en) * 1996-02-29 2000-02-01 Matsushita Electric Industrial Co., Ltd. Processor
DE19861088A1 (en) 1997-12-22 2000-02-10 Pact Inf Tech Gmbh Repairing integrated circuits by replacing subassemblies with substitutes
US6853385B1 (en) * 1999-11-09 2005-02-08 Broadcom Corporation Video, audio and graphics decode, composite and display system
US6785840B1 (en) * 1999-08-31 2004-08-31 Nortel Networks Limited Call processor system and methods
US9668011B2 (en) 2001-02-05 2017-05-30 Avago Technologies General Ip (Singapore) Pte. Ltd. Single chip set-top box system
AU2001243463A1 (en) * 2000-03-10 2001-09-24 Arc International Plc Memory interface and method of interfacing between functional entities
US6731303B1 (en) * 2000-06-15 2004-05-04 International Business Machines Corporation Hardware perspective correction of pixel coordinates and texture coordinates
KR100448709B1 (en) 2001-11-29 2004-09-13 삼성전자주식회사 Data bus system and method for controlling the same
GB2417586B (en) 2002-07-19 2007-03-28 Picochip Designs Ltd Processor array
US7352374B2 (en) * 2003-04-07 2008-04-01 Clairvoyante, Inc Image data set with embedded pre-subpixel rendered image
US7124318B2 (en) * 2003-09-18 2006-10-17 International Business Machines Corporation Multiple parallel pipeline processor having self-repairing capability
US7245302B1 (en) * 2003-10-30 2007-07-17 Nvidia Corporation Processing high numbers of independent textures in a 3-D graphics pipeline
US7577869B2 (en) * 2004-08-11 2009-08-18 Ati Technologies Ulc Apparatus with redundant circuitry and method therefor
US7460126B2 (en) * 2004-08-24 2008-12-02 Silicon Graphics, Inc. Scalable method and system for streaming high-resolution media
WO2006039711A1 (en) * 2004-10-01 2006-04-13 Lockheed Martin Corporation Service layer architecture for memory access system and method
US7280107B2 (en) 2005-06-29 2007-10-09 Microsoft Corporation Procedural graphics architectures and techniques
WO2007049610A1 (en) 2005-10-25 2007-05-03 Mitsubishi Electric Corporation Image processor
US8933933B2 (en) 2006-05-08 2015-01-13 Nvidia Corporation Optimizing a graphics rendering pipeline using early Z-mode
US7928990B2 (en) * 2006-09-27 2011-04-19 Qualcomm Incorporated Graphics processing unit with unified vertex cache and shader register file
US7999821B1 (en) * 2006-12-19 2011-08-16 Nvidia Corporation Reconfigurable dual texture pipeline with shared texture cache
US8274520B2 (en) * 2007-06-08 2012-09-25 Apple Inc. Facilitating caching in an image-processing system

Patent Citations (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5224208A (en) * 1990-03-16 1993-06-29 Hewlett-Packard Company Gradient calculation for texture mapping
US6104415A (en) * 1998-03-26 2000-08-15 Silicon Graphics, Inc. Method for accelerating minified textured cache access
US7136068B1 (en) * 1998-04-07 2006-11-14 Nvidia Corporation Texture cache for a computer graphics accelerator
US20080284786A1 (en) * 1998-06-16 2008-11-20 Silicon Graphics, Inc. Display System Having Floating Point Rasterization and Floating Point Framebuffering
US7164426B1 (en) * 1998-08-20 2007-01-16 Apple Computer, Inc. Method and apparatus for generating texture
US7330188B1 (en) * 1999-03-22 2008-02-12 Nvidia Corp Texture caching arrangement for a computer graphics accelerator
US20040189652A1 (en) * 2003-03-31 2004-09-30 Emberling Brian D. Optimized cache structure for multi-texturing
US6897871B1 (en) * 2003-11-20 2005-05-24 Ati Technologies Inc. Graphics processing architecture employing a unified shader
US20080094405A1 (en) * 2004-04-12 2008-04-24 Bastos Rui M Scalable shader architecture
US20060028482A1 (en) * 2004-08-04 2006-02-09 Nvidia Corporation Filtering unit for floating-point texture data
US20060250409A1 (en) * 2005-04-08 2006-11-09 Yosuke Bando Image rendering method and image rendering apparatus using anisotropic texture mapping
US20070211070A1 (en) * 2006-03-13 2007-09-13 Sony Computer Entertainment Inc. Texture unit for multi processor environment
US7936359B2 (en) * 2006-03-13 2011-05-03 Intel Corporation Reconfigurable floating point filter
US20080094407A1 (en) * 2006-06-20 2008-04-24 Via Technologies, Inc. Systems and Methods for Storing Texture Map Data
US20080094408A1 (en) * 2006-10-24 2008-04-24 Xiaoqin Yin System and Method for Geometry Graphics Processing
US20090295821A1 (en) * 2008-05-30 2009-12-03 Advanced Micro Devices, Inc. Scalable and Unified Compute System
US20090295819A1 (en) * 2008-05-30 2009-12-03 Advanced Micro Devices, Inc. Floating Point Texture Filtering Using Unsigned Linear Interpolators and Block Normalizations
US20090315909A1 (en) * 2008-05-30 2009-12-24 Advanced Micro Devices, Inc. Unified Shader Engine Filtering System

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090295819A1 (en) * 2008-05-30 2009-12-03 Advanced Micro Devices, Inc. Floating Point Texture Filtering Using Unsigned Linear Interpolators and Block Normalizations
US20090295821A1 (en) * 2008-05-30 2009-12-03 Advanced Micro Devices, Inc. Scalable and Unified Compute System
US20090315909A1 (en) * 2008-05-30 2009-12-24 Advanced Micro Devices, Inc. Unified Shader Engine Filtering System
US8502832B2 (en) 2008-05-30 2013-08-06 Advanced Micro Devices, Inc. Floating point texture filtering using unsigned linear interpolators and block normalizations
US8558836B2 (en) 2008-05-30 2013-10-15 Advanced Micro Devices, Inc. Scalable and unified compute system
US20110050716A1 (en) * 2009-09-03 2011-03-03 Advanced Micro Devices, Inc. Processing Unit with a Plurality of Shader Engines
US9142057B2 (en) * 2009-09-03 2015-09-22 Advanced Micro Devices, Inc. Processing unit with a plurality of shader engines

Also Published As

Publication number Publication date
US8558836B2 (en) 2013-10-15
US20090315909A1 (en) 2009-12-24
US20210090208A1 (en) 2021-03-25
US9367891B2 (en) 2016-06-14
US20220343456A1 (en) 2022-10-27
US20090295820A1 (en) 2009-12-03
US20150332427A1 (en) 2015-11-19
US20100146211A1 (en) 2010-06-10
US11948223B2 (en) 2024-04-02
US8195882B2 (en) 2012-06-05
US9093040B2 (en) 2015-07-28
US11386520B2 (en) 2022-07-12
US20090295821A1 (en) 2009-12-03
US10861122B2 (en) 2020-12-08
US20160260192A1 (en) 2016-09-08

Similar Documents

Publication Publication Date Title
US20090309896A1 (en) Multi Instance Unified Shader Engine Filtering System With Level One and Level Two Cache
US7999819B2 (en) Systems and methods for managing texture descriptors in a shared texture engine
US10269090B2 (en) Rendering to multi-resolution hierarchies
US9245496B2 (en) Multi-mode memory access techniques for performing graphics processing unit-based memory transfer operations
US9734548B2 (en) Caching of adaptively sized cache tiles in a unified L2 cache with surface compression
US20120017062A1 (en) Data Processing Using On-Chip Memory In Multiple Processing Units
EP2596471B1 (en) Split storage of anti-aliased samples
KR20190100194A (en) Forbidden Rendering in Tiled Architectures
JP2008305408A (en) Extrapolation of nonresident mipmap data using resident mipmap data
US8928679B2 (en) Work distribution for higher primitive rates
KR20090079241A (en) Graphics processing unit with shared arithmetic logic unit
EP2297723A1 (en) Scalable and unified compute system
US20230269391A1 (en) Adaptive Pixel Sampling Order for Temporally Dense Rendering
US7979683B1 (en) Multiple simultaneous context architecture
US8478946B2 (en) Method and system for local data sharing
US20120013629A1 (en) Reading Compressed Anti-Aliased Images
US10019776B2 (en) Techniques for maintaining atomicity and ordering for pixel shader operations
WO2009145919A1 (en) Shader complex with distributed level one cache system and centralized level two cache
US11755336B2 (en) Distributed geometry

Legal Events

Date Code Title Description
AS Assignment

Owner name: ADVANCED MICRO DEVICES, INC., CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:DELAURIER, ANTHONY P.;LEATHER, MARK;HARTOG, ROBERT S.;AND OTHERS;SIGNING DATES FROM 20090610 TO 20090817;REEL/FRAME:023190/0209

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION