


Title:
CENTRAL PROCESSING UNIT HAVING X86 AND DSP FUNCTIONAL UNITS
Document Type and Number:
WIPO Patent Application WO/1997/035249
Kind Code:
A1
Abstract:
A CPU or microprocessor which includes a general purpose CPU component, such as an X86 core, and also includes a DSP core. The CPU also includes an intelligent DSP function decoder or preprocessor which examines X86 opcode sequences and determines if a DSP function is being executed. If the DSP function decoder determines that a DSP function is being executed, the DSP function decoder converts or maps the opcodes to a DSP macro instruction that is provided to the DSP core. The DSP core executes one or more DSP instructions to implement the desired DSP function in response to the macro instruction. The DSP core implements or performs the DSP function using a lesser number of instructions and also in a reduced number of clock cycles, thus increasing system performance. If the X86 opcodes in the instruction cache or instruction memory do not indicate or are not intended to perform a DSP-type function, the opcodes are provided to the X86 core, as occurs in current prior art computer systems. The X86 core and the DSP core are coupled to each other and communicate data and timing signals for synchronization purposes. Thus, the DSP core offloads these mathematical functions from the X86 core, thereby increasing system performance. The DSP core also operates in parallel with the X86 core, providing further performance benefits. The CPU of the present invention thus implements DSP functions more efficiently than X86 logic while requiring no additional X86 opcodes. The present invention also generates code that operates transparently on an X86-only CPU or a CPU according to the present invention which includes X86 and DSP cores. Thus the present invention is backwards compatible with existing software.

Inventors:
ASGHAR SAF
IRETON MARK A
BARTKOWIAK JOHN G
Application Number:
PCT/US1996/019585
Publication Date:
September 25, 1997
Filing Date:
December 11, 1996
Assignee:
ADVANCED MICRO DEVICES INC (US)
International Classes:
G06F9/318; G06F9/38; G06F15/78; (IPC1-7): G06F9/38; G06F9/318; G06F15/78
Foreign References:
EP0071028A2 (1983-02-09)
EP0478904A2 (1992-04-08)
US5355485A (1994-10-11)
Other References:
HALFHILL T R: "AMD K6 TAKES ON INTEL P6", BYTE, vol. 21, no. 1, 1 January 1996 (1996-01-01), pages 67/68, 70, 72, XP000545340
Claims:
Claims
1. A central processing unit which includes general purpose execution units and digital signal processing execution units, comprising: an instruction memory for storing a plurality of instructions, wherein said instruction memory stores one or more sequences of instructions which implement a digital signal processing function; an instruction decoder coupled to the instruction memory which examines said one or more sequences of instructions stored in said instruction memory and determines whether a sequence of instructions in said instruction memory implements a digital signal processing function, wherein said instruction decoder converts a sequence of instructions in said instruction memory which implements a digital signal processing function into a digital signal processing function identifier; one or more general purpose execution units coupled to the instruction decoder for executing instructions in said instruction memory; one or more digital signal processing execution units coupled to the instruction decoder for performing digital signal processing functions in response to a received digital signal processing function identifier; wherein the instruction decoder transfers instructions which do not implement a digital signal processing function to the one or more general purpose execution units, wherein the instruction decoder transfers digital signal processing function identifiers to the one or more digital signal processing execution units, and wherein the one or more digital signal processing execution units receive said digital signal processing function identifiers and perform digital signal processing functions in response to said received digital signal processing function identifiers.
2. The central processing unit of claim 1, wherein said instruction memory stores a first sequence of instructions which does not implement a digital signal processing function, and wherein said instruction memory stores a second sequence of instructions which implements a digital signal processing function; wherein said one or more general purpose execution units execute said first sequence of instructions; wherein said one or more digital signal processing execution units perform a digital signal processing function in response to said received digital signal processing function identifier, wherein said digital signal processing function is substantially equivalent to execution of said second sequence of instructions.
3. The central processing unit of claim 1, wherein said one or more digital signal processing execution units provide data and timing signals to said one or more general purpose execution units.
4. The central processing unit of claim 1, wherein said instruction decoder generates a digital signal processing function identifier and one or more parameters in response to a sequence of instructions in said instruction memory which implements a digital signal processing function, wherein said instruction decoder transfers said digital signal processing function identifier and said one or more parameters to the one or more digital signal processing execution units, and wherein the one or more digital signal processing execution units receive said digital signal processing function identifier and said one or more parameters and perform a digital signal processing function in response to said received digital signal processing function identifier and said one or more parameters.
5. The central processing unit of claim 1, wherein said one or more digital signal processing execution units are adapted for performing one or more mathematical operations from the group consisting of convolution, correlation, Fast Fourier Transforms, and inner product.
6. The central processing unit of claim 1, wherein said one or more general purpose execution units and said one or more digital signal processing execution units operate substantially in parallel.
7. The central processing unit of claim 1, wherein said central processing unit is compatible with the X86 family of microprocessors.
8. The central processing unit of claim 1, wherein said plurality of instructions in said instruction memory are X86 opcode instructions.
9. A method for executing instructions in a central processing unit (CPU), wherein the CPU includes one or more general purpose execution units and one or more digital signal processing execution units, the method comprising: storing a plurality of instructions in an instruction memory, wherein said plurality of instructions include one or more sequences of instructions which implement a digital signal processing function; examining said plurality of instructions in said instruction memory; determining whether a sequence of instructions in said instruction memory implements a digital signal processing function; transferring instructions which do not implement a digital signal processing function to the one or more general purpose execution units; the one or more general purpose execution units receiving said instructions which do not implement a digital signal processing function; the one or more general purpose execution units executing said instructions which do not implement a digital signal processing function; converting a sequence of instructions in said instruction memory which implements a digital signal processing function into a digital signal processing function identifier; transferring said digital signal processing function identifier to the one or more digital signal processing execution units; the one or more digital signal processing execution units receiving said digital signal processing function identifier; and the one or more digital signal processing execution units performing a digital signal processing function in response to said received digital signal processing function identifier.
10. The method of claim 9, further comprising: storing a first sequence of instructions in said instruction memory which implements a first digital signal processing function; storing a second sequence of instructions in said instruction memory which does not implement a digital signal processing function; converting said first sequence of instructions in said instruction memory which implements said first digital signal processing function into a first digital signal processing function identifier; said one or more digital signal processing execution units performing said first digital signal processing function in response to said received first digital signal processing function identifier, wherein said first digital signal processing function is substantially equivalent to execution of said first sequence of instructions; and said one or more general purpose execution units executing said second sequence of instructions.
11. The method of claim 9, further comprising said one or more digital signal processing execution units providing data and timing signals to said one or more general purpose execution units.
12. The method of claim 9, further comprising: said instruction decoder generating a digital signal processing function identifier and one or more parameters in response to said determining that said sequence of instructions in said instruction memory implements a digital signal processing function; transferring said digital signal processing function identifier and said one or more parameters to the one or more digital signal processing execution units; the one or more digital signal processing execution units receiving said digital signal processing function identifier and said one or more parameters; and the one or more digital signal processing execution units performing a digital signal processing function in response to said received digital signal processing function identifier and said one or more parameters.
13. The method of claim 9, further comprising the one or more digital signal processing execution units performing one or more mathematical operations from the group consisting of convolution, correlation, Fast Fourier Transforms, and inner product.
14. The method of claim 9, wherein the one or more general purpose execution units executing said instructions which do not implement a digital signal processing function and the one or more digital signal processing execution units performing digital signal processing functions occur substantially in parallel.
15. The method of claim 9, wherein said central processing unit is compatible with the X86 family of microprocessors.
16. The method of claim 9, wherein said storing said plurality of instructions in said instruction memory comprises storing a plurality of X86 opcode instructions.
Description:
Title: Central Processing Unit Having X86 and DSP Functional Units

Cross Reference to Related Applications

The following applications are related to the present application and are hereby incorporated by reference in their entirety:

U.S. Patent Application Serial No. ______, titled "Central Processing Unit Having an X86 and DSP Core and Including a DSP Function Decoder which Maps X86 Instructions to DSP Instructions", filed ______, and which is assigned to Advanced Micro Devices Corp.

U.S. Patent Application Serial No. ______, titled "Central Processing Unit Including a DSP Function Preprocessor Which Scans Instruction Sequences for DSP Functions", filed ______, and which is assigned to Advanced Micro Devices Corp.

U.S. Patent Application Serial No. ______, titled "Central Processing Unit Including a DSP Function Preprocessor Having a Pattern Recognition Detector for Detecting Instruction Sequences which Perform DSP Functions", filed ______, and which is assigned to Advanced Micro Devices Corp.

U.S. Patent Application Serial No. ______, titled "Central Processing Unit Including a DSP Function Preprocessor Having a Look-up Table Apparatus for Detecting Instruction Sequences which Perform DSP Functions", filed ______, and which is assigned to Advanced Micro Devices Corp.

Field of the Invention

The present invention relates to a computer system CPU or microprocessor which includes a general purpose core and a DSP core, wherein the CPU includes a DSP function decoder which detects general purpose opcode sequences intended to perform DSP-type functions and converts these opcodes into corresponding DSP macros for execution by the DSP core.

Description of the Related Art

Personal computer systems and general purpose microprocessors were originally developed for business applications such as word processing and spreadsheets, among others. However, computer systems are currently being used to handle a number of real time DSP-related applications, including multimedia applications having video and audio components, video capture and playback, telephony applications, speech recognition and synthesis, and communication applications, among others. These real time or DSP-like applications typically require increased CPU floating point performance.

One problem that has arisen is that general purpose microprocessors originally designed for business applications are not well suited for the real-time requirements and mathematical computation requirements of modern DSP-related applications, such as multimedia applications and communications applications. For example, the X86 family of microprocessors from Intel Corporation are oriented toward integer-based calculations and memory management operations and do not perform DSP-type functions very well.

As personal computer systems have evolved toward more real-time and multimedia capable systems, the general purpose CPU has been correspondingly required to perform more mathematically intensive DSP-type functions. Therefore, many computer systems now include one or more digital signal processors which are dedicated to these complex mathematical functions.

A recent trend in computer system architectures is the movement toward "native signal processing (NSP)". Native signal processing or NSP was originally introduced by Intel Corporation as a strategy to offload certain functions from DSPs and perform these functions within the main or general purpose CPU. The strategy presumes that, as performance and clock speeds of general purpose CPUs increase, the general purpose CPU is able to perform many of the functions formerly performed by dedicated DSPs. Thus, one trend in the microprocessor industry is an effort to provide CPU designs with higher speeds and augmented with DSP-type capabilities, such as more powerful floating point units. Another trend in the industry is for DSP manufacturers to provide DSPs that not only run at high speeds but also can emulate CPU-type capabilities such as memory management functions.

A digital signal processor is essentially a general purpose microprocessor which includes special hardware for executing mathematical functions at speeds and efficiencies not usually associated with microprocessors. In current computer system architectures, DSPs are used as co-processors and operate in conjunction with general purpose CPUs within the system. For example, current computer systems may include a general purpose CPU as the main CPU and include one or more multimedia or communication expansion cards which include dedicated DSPs. The CPU offloads mathematical functions to the digital signal processor, thus increasing system efficiency. Digital signal processors include execution units that comprise one or more arithmetic logic units (ALUs) coupled to hardware multipliers which implement complex mathematical algorithms in a pipelined manner. The instruction set primarily comprises DSP-type instructions and also includes a small number of instructions having non-DSP functionality.

The DSP is typically optimized for mathematical algorithms such as correlation, convolution, finite impulse response (FIR) filters, infinite impulse response (IIR) filters, Fast Fourier Transforms (FFTs), matrix computations, and inner products, among other operations. Implementations of these mathematical algorithms generally comprise long sequences of systematic arithmetic/multiplicative operations. These operations are interrupted on various occasions by decision-type commands. In general, the DSP sequences are a repetition of a very small set of instructions that are executed 70% to 90% of the time. The remaining 10% to 30% of the instructions are primarily Boolean/decision operations (or general data processing).
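The structure of these algorithms can be illustrated with a short sketch (illustrative only, not part of the patent disclosure): an inner product and an FIR filter both reduce to the same systematic multiply-accumulate loop that a DSP's dedicated multiplier/accumulator hardware executes in roughly one cycle per term.

```python
# Illustrative sketch: the multiply-accumulate (MAC) loop underlying
# inner products and FIR filters. A DSP with hardware MAC units
# executes each multiply-add in a single pipelined cycle.

def inner_product(a, b):
    acc = 0
    for x, y in zip(a, b):
        acc += x * y          # one MAC operation per element pair
    return acc

def fir_filter(samples, taps):
    # Each output sample is the inner product of the tap weights
    # with a sliding window of input samples.
    n = len(taps)
    return [inner_product(taps, samples[i:i + n])
            for i in range(len(samples) - n + 1)]
```

The loop body is the "very small set of instructions" executed 70% to 90% of the time that the text describes; a general purpose CPU must issue several load, multiply, add, and branch instructions per iteration, where a DSP issues one.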

A general purpose CPU is comprised of an execution unit, a memory management unit, and a floating point unit, as well as other logic. The task of a general purpose CPU is to execute code and perform operations on data in the computer memory and thus to manage the computing platform. In general, the general purpose CPU architecture is designed primarily to perform Boolean / management / data manipulation decision operations. The instructions or opcodes executed by a general purpose CPU include basic mathematical functions. However, these mathematical functions are not well adapted to complex DSP-type mathematical operations. Thus a general purpose CPU is required to execute a large number of opcodes or instructions to perform basic DSP functions.

Therefore, a computer system and CPU architecture is desired which includes a general purpose CPU and which also performs DSP-type mathematical functions with increased performance. A CPU architecture is also desired which is backwards compatible with existing software applications which presume that the general purpose CPU is performing all of the mathematical computations. A new CPU architecture is further desired which provides increased mathematical performance for existing software applications.

One popular microprocessor used in personal computer systems is the X86 family of microprocessors. The X86 family of microprocessors includes the 8088, 8086, 80186, 80286, 80386, i486, Pentium, and P6 microprocessors from Intel Corporation. The X86 family of microprocessors also includes X86-compatible processors such as the 486 and K5 processors from Advanced Micro Devices, the M1 processor from Cyrix Corporation, and the NextGen 5x86 and 6x86 processors from NextGen Corporation. The X86 family of microprocessors was primarily designed and developed for business applications. In general, the instruction set of the X86 family of microprocessors does not include sufficient mathematical or DSP functionality for modern multimedia and communications applications. Therefore, a new X86 CPU architecture is further desired which implements DSP functions more efficiently than current X86 processors, but also requires no additional opcodes for the X86 processor.

Summary of the Invention

The present invention comprises a CPU or microprocessor which includes a general purpose CPU component, such as an X86 core, and also includes a DSP core. The CPU includes an intelligent DSP function decoder or preprocessor which examines sequences of instructions or opcodes (X86 opcodes) and determines if a DSP function is being executed. If the DSP function decoder determines that a DSP function is being executed, the DSP function decoder converts or maps the instruction sequence to a DSP macro instruction or function identifier that is provided to the DSP core. The DSP core executes one or more DSP instructions to implement the desired DSP function indicated by the DSP macro or function identifier. The DSP core performs the DSP function in parallel with other operations performed by the general purpose CPU core. The DSP core also performs the DSP function using a lesser number of instructions and also in a reduced number of clock cycles, thus increasing system performance.

In the preferred embodiment, the CPU of the present invention includes an instruction memory or instruction cache which receives microprocessor instructions or opcodes from the system memory and stores these opcodes for use by the CPU. The CPU also includes a DSP function decoder or preprocessor, also referred to as an instruction sequence preprocessor, which analyzes instruction sequences in the instruction cache and intelligently determines when a DSP-type function is implemented by or represented by the instruction sequence. The function preprocessor scans ahead for instruction sequences in the instruction cache that implement DSP functions.

In one embodiment, the function preprocessor includes a pattern recognition detector which stores a plurality of bit patterns indicative of instruction sequences which implement DSP functions. The pattern recognition detector compares each pattern with an instruction sequence and determines if one of the patterns substantially matches the instruction sequence. In one embodiment, a substantial match occurs when a pattern matches the instruction sequence by greater than 90%. In another embodiment, the function preprocessor includes a look-up table which stores a plurality of bit pattern entries indicative of instruction sequences which implement DSP functions. The function preprocessor compares each pattern entry with an instruction sequence and determines if one of the entries exactly matches the instruction sequence. Other embodiments include a two-stage determination using both a look-up table and a pattern recognition detector.
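The two detection schemes can be modeled in software as a sketch (the patent describes hardware detectors; the opcode byte values below are hypothetical placeholders, not actual X86 encodings, and the 90% rule follows the "substantial match" embodiment described above):

```python
# Illustrative model of the two detection embodiments. The byte
# patterns are hypothetical placeholders, not real X86 opcodes.

DSP_PATTERNS = {
    # pattern (tuple of opcode bytes) -> DSP function identifier
    (0x0F, 0xAF, 0x01, 0x03): "INNER_PRODUCT",
    (0x0F, 0xAF, 0x01, 0x83): "FIR_FILTER",
}

def lookup_table_match(seq):
    """Look-up table embodiment: an entry must match exactly."""
    return DSP_PATTERNS.get(tuple(seq))

def pattern_recognition_match(seq, threshold=0.9):
    """Pattern recognition embodiment: a pattern substantially
    matches if more than `threshold` of its opcodes agree with
    the instruction sequence."""
    for pattern, func_id in DSP_PATTERNS.items():
        if len(pattern) != len(seq):
            continue
        agree = sum(p == s for p, s in zip(pattern, seq))
        if agree / len(pattern) > threshold:
            return func_id
    return None
```

With patterns this short, the greater-than-90% rule effectively demands an exact match; real hardware patterns covering long instruction sequences would tolerate small variations.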

In the preferred embodiment, the function preprocessor detects X86 instruction sequences which are intended to perform DSP-type functions, such as convolution, correlation, Fast Fourier Transforms (FFTs), finite impulse response (FIR) filters, infinite impulse response (IIR) filters, inner products, and matrix manipulation operations.

If the instructions in the instruction cache or instruction memory do not implement a DSP-type function, the instructions are provided to the general purpose or X86 core, or to one or more X86 execution units, as occurs in current prior art computer systems. Thus, the X86 core executes general purpose X86 instructions which do not represent DSP functions.

When the function preprocessor detects a sequence of X86 instructions which implement a DSP function, i.e., are intended to perform a DSP-type function, the function preprocessor decodes the sequence of X86 instructions and generates a single macro or function identifier which represents the function indicated by the sequence of X86 instructions. The function preprocessor also examines information in the X86 instruction sequence and generates zero or more parameters which indicate the data values being used for the DSP-type operation. The function preprocessor then provides the function identifier and the various necessary parameters to the DSP core, or to one or more DSP execution units.

The DSP core receives the macro or function identifier and the respective parameters and uses the macro to index into a DSP microcode sequence which implements the indicated DSP function. The DSP core also uses the respective parameters in executing the DSP function. Since the DSP core is optimized for these DSP-type mathematical operations, the DSP core can generally execute the desired function in a reduced number of instructions and clock cycles.
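As a software analogy of this indexing step (the patent describes a microcode ROM, not a software table; all names here are hypothetical), the function identifier selects a routine and the decoded parameters are passed to it:

```python
# Illustrative analogy: a dictionary stands in for the DSP microcode
# ROM, and a Python function stands in for each microcode sequence.
# Names and parameter layout are hypothetical.

def _inner_product(params):
    a, b = params["vec_a"], params["vec_b"]
    return sum(x * y for x, y in zip(a, b))

MICROCODE = {
    "INNER_PRODUCT": _inner_product,
}

def dsp_execute(function_id, params):
    # Index into the "microcode" by function identifier, then run
    # the selected routine with the decoded parameters.
    routine = MICROCODE[function_id]
    return routine(params)
```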

The DSP core executes in parallel with the general purpose CPU core. Thus, X86 (non-DSP) opcodes are potentially executed by the general purpose CPU core or X86 core in parallel with DSP functions, assuming there is data independence. The general purpose core and the DSP core are coupled to each other and communicate data and timing signals for synchronization purposes. In one embodiment, a cache or buffer is coupled between the general purpose core and the DSP core for the transfer of information between the two units.

Thus, the general purpose CPU portion executes X86 instructions as in prior systems. However, for those instruction sequences which are intended to perform DSP-type functions, the function preprocessor intelligently detects these sequences and provides a corresponding macro and parameters to the DSP core. Thus, the DSP core offloads these mathematical functions from the general purpose core, thereby increasing system performance. The DSP core also operates in parallel with the general purpose core, providing further performance benefits.

Therefore, the present invention comprises a general purpose CPU including a DSP core which performs DSP operations. The CPU includes an intelligent DSP function decoder or preprocessor which examines instruction sequences and converts or maps sequences which perform DSP functions to a DSP macro instruction for execution by the DSP core. The DSP core uses the DSP macro instruction to implement the desired DSP function. The DSP core implements or performs the DSP function in a lesser number of instructions and also in a reduced number of clock cycles, thus increasing system performance. The CPU of the present invention thus implements DSP functions more efficiently than X86 logic while requiring no additional X86 opcodes. The CPU of the present invention also executes code that operates on an X86-only CPU, thus providing backwards compatibility with existing software. Further, code written for the CPU of the present invention also operates properly on an X86-only CPU.

Brief Description of the Drawings

A better understanding of the present invention can be obtained when the following detailed description of the preferred embodiment is considered in conjunction with the following drawings, in which:

Figure 1 is a block diagram of a computer system including a CPU having a general purpose CPU core and a DSP core according to the present invention;

Figure 2 is a block diagram of the CPU of Figure 1 including a general purpose CPU core and a DSP core and including a DSP function preprocessor according to the present invention;

Figure 3 is a flowchart diagram illustrating operation of the present invention;
Figure 4 is a more detailed block diagram of the CPU of Figure 1;
Figure 5 is a block diagram of the Instruction Decode Unit of Figure 4;

Figure 6 is a block diagram of the function preprocessor according to one embodiment of the invention;
Figure 7 is a block diagram of the function preprocessor including a pattern recognition detector according to one embodiment of the invention;

Figure 8 illustrates operation of the pattern recognition detector of Figure 7;
Figure 9 is a block diagram of the function preprocessor including a look-up table according to one embodiment of the invention;

Figure 10 illustrates operation of the look-up table of Figure 9; and

Figure 11 is a block diagram of the function preprocessor including a pattern recognition detector and a look-up table according to one embodiment of the invention.

Detailed Description of the Preferred Embodiment

Incorporation by Reference

Pentium System Architecture, by Don Anderson and Tom Shanley, available from Mindshare Press, 2202 Buttercup Dr., Richardson, TX 75082, (214) 231-2216, is hereby incorporated by reference in its entirety. Digital Signal Processing Applications Using the ADSP-2100 Family, Volumes 1 and 2, 1995 edition, available from Analog Devices Corporation of Norwood, MA, is hereby incorporated by reference in its entirety.

The Intel CPU Handbook, 1994 and 1995 editions, available from Intel Corporation, are hereby incorporated by reference in their entirety.

The AMD K5 Handbook, 1995 edition, available from Advanced Micro Devices Corporation, is hereby incorporated by reference in its entirety.

Computer System Block Diagram

Referring now to Figure 1, a block diagram of a computer system incorporating a central processing unit (CPU) or microprocessor 102 according to the present invention is shown. The computer system shown in Figure 1 is illustrative only, and the CPU 102 of the present invention may be incorporated into any of various types of computer systems.

As shown, the CPU 102 includes a general purpose CPU core 212 and a DSP core 214. The general purpose core 212 executes general purpose (non-DSP) opcodes, and the DSP core 214 executes DSP-type functions, as described further below. In the preferred embodiment, the general purpose CPU core 212 is an X86 core, i.e., is compatible with the X86 family of microprocessors. However, the general purpose CPU core 212 may be any of various types of CPUs, including the PowerPC family, the DEC Alpha, and the SunSparc family of processors, among others. In the following disclosure, the general purpose CPU core 212 is referred to as an X86 core for convenience. The general purpose core 212 may comprise one or more general purpose execution units, and the DSP core 214 may comprise one or more digital signal processing execution units.

As shown, the CPU 102 is coupled through a CPU local bus 104 to a host/PCI/cache bridge or chipset 106. The chipset 106 is preferably similar to the Triton chipset available from Intel Corporation. A second level or L2 cache memory (not shown) may be coupled to a cache controller in the chipset, as desired. Also, for some processors the external cache may be an L1 or first level cache. The bridge or chipset 106 couples through a memory bus 108 to main memory 110. The main memory 110 is preferably DRAM (dynamic random access memory) or EDO (extended data out) memory, or other types of memory, as desired.

The chipset 106 includes various peripherals, including an interrupt system, a real time clock (RTC) and timers, a direct memory access (DMA) system, ROM/Flash memory, communications ports, diagnostics ports, command/status registers, and non-volatile static random access memory (NVSRAM) (all not shown).

The host/PCI/cache bridge or chipset 106 interfaces to a peripheral component interconnect (PCI) bus 120. In the preferred embodiment, a PCI local bus is used. However, it is noted that other local buses may be used, such as the VESA (Video Electronics Standards Association) VL bus. Various types of devices may be connected to the PCI bus 120. In the embodiment shown in Figure 1, a video/graphics controller or adapter 170 and a network interface controller 140 are coupled to the PCI bus 120. The video adapter connects to a video monitor 172, and the network interface controller 140 couples to a local area network (LAN). A SCSI (small computer systems interface) adapter 122 may also be coupled to the PCI bus 120, as shown. The SCSI adapter 122 may couple to various SCSI devices 124, such as a CD-ROM drive and a tape drive, as desired. Various other devices may be connected to the PCI bus 120, as is well known in the art.

Expansion bus bridge logic 150 may also be coupled to the PCI bus 120. The expansion bus bridge logic 150 interfaces to an expansion bus 152. The expansion bus 152 may be any of varying types, including the industry standard architecture (ISA) bus, also referred to as the AT bus, the extended industry standard architecture (EISA) bus, or the MicroChannel architecture (MCA) bus. Various devices may be coupled to the expansion bus 152, such as expansion bus memory 154 and a modem 156.

CPU Block Diagram

Referring now to Figure 2, a high level block diagram illustrating certain components in the CPU 102 of Figure 1 is shown. As shown, the CPU 102 includes an instruction cache or instruction memory 202 which receives instructions or opcodes from the system memory 110. Function preprocessor 204 is coupled to the instruction memory 202 and examines instruction sequences or opcode sequences in the instruction memory 202. The function preprocessor 204 is also coupled to the X86 core 212 and the DSP core 214. As shown, the function preprocessor 204 provides instructions or opcodes to the X86 core 212 and also provides information to the DSP core 214.

The X86 core 212 and DSP core 214 are coupled together and provide data and timing signals between each other. In one embodiment, the CPU 102 includes one or more buffers (not shown) which interface between the X86 core 212 and the DSP core 214 to facilitate transmission of data between the X86 core 212 and the DSP core 214.

Figure 3 - Flowchart

Referring now to Figure 3, a flowchart diagram illustrating operation of the present invention is shown. It is noted that two or more of the steps in Figure 3 may operate concurrently, and the operation of the invention is shown in flowchart form for convenience. As shown in step 302, the instruction memory 202 receives and stores a plurality of X86 instructions. The plurality of X86 instructions may include one or more instruction sequences which implement a DSP function. In step 304 the function preprocessor 204 analyzes the opcodes, i.e., an instruction sequence, in the instruction memory 202, and in step 306 intelligently determines if the sequence of instructions is designed or intended to perform a DSP-type function, i.e., determines if the instruction sequence implements a DSP-type function. In the present disclosure, a DSP-type function comprises one or more of the following mathematical functions: correlation, convolution, Fast Fourier Transform, finite impulse response filter, infinite impulse response filter, inner product, and matrix manipulation, among others. The operation of the function preprocessor 204 is described more fully in the description associated with Figure 4.

If the instructions or opcodes stored in the instruction cache 202 do not correspond to a DSP-type function, the instructions are provided to the X86 core 212 in step 308. Thus, these instructions or opcodes are provided directly from the instruction cache 202 to the X86 core 212 for execution, as occurs in prior art X86 compatible CPUs. After the opcodes are transferred to the X86 core 212, in step 310 the X86 core 212 executes the instructions.

If the function preprocessor 204 detects a sequence of instructions which correspond to or implement a DSP-type function in step 306, then in step 312 the function preprocessor 204 analyzes the sequence of instructions and determines the respective DSP-type function being implemented. In step 312 the function preprocessor 204

maps the sequence of instructions to a respective DSP macro identifier, also referred to as a function identifier. The function preprocessor 204 also analyzes the information in the sequence of opcodes in step 312 and generates zero or more parameters for use by the DSP core or accelerator 214 in executing the function identifier. As shown in step 314, the function preprocessor 204 provides the function identifier and the parameters to the DSP core 214. The DSP core 214 receives the function identifier and the associated parameters from the function preprocessor 204 and in step 316 performs the respective DSP function. In the preferred embodiment, the DSP core 214 uses the function identifier to index into a DSP microcode RAM or ROM to execute a sequence of DSP instructions or opcodes. The DSP instructions cause the DSP to perform the desired DSP-type function. The DSP core 214 also uses the respective parameters in executing the DSP function. As mentioned above, the X86 core 212 and DSP core 214 are coupled together and provide data and timing signals between each other. In the preferred embodiment, the X86 core 212 and DSP core 214 operate substantially in parallel. Thus, while the X86 core 212 is executing one sequence of opcodes, the DSP accelerator 214 may be executing one or more DSP functions corresponding to another sequence of opcodes. Thus, the DSP core 214 does not operate as a slave or co-processor, but rather operates as an independent execution unit or pipeline. The DSP core 214 and the X86 core 212 provide data and timing signals to each other to indicate the status of operations and also to provide any data outputs produced, as well as to ensure data coherency / independence.
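The dispatch decision of steps 304-316 can be illustrated with a minimal Python sketch. The pattern table, the `dispatch` function, and its return convention are hypothetical illustrations, not part of the patent disclosure:

```python
# Hypothetical sketch of the function preprocessor's dispatch decision.
# A detected DSP sequence is mapped to a macro identifier plus parameters
# for the DSP core (steps 312-314); any other sequence passes through
# unchanged to the X86 core (step 308).
DSP_PATTERNS = {
    # opcode sequence implementing a DSP-type function -> function identifier
    ("FLdZ", "Fld", "Inc", "Fld", "Inc", "FMulP", "FAddP", "LOOP"):
        "Inner_product_simple",
}

def dispatch(opcode_sequence, parameters):
    """Return the target core and what it receives for this sequence."""
    macro = DSP_PATTERNS.get(tuple(opcode_sequence))
    if macro is not None:
        return ("dsp_core", macro, parameters)      # steps 312-316
    return ("x86_core", opcode_sequence, None)      # steps 308-310
```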

Example Operation

The following describes an example of how a string or sequence of X86 opcodes is converted into a function identifier and then executed by the DSP core or accelerator 214 according to the present invention. The following describes an X86 opcode sequence which performs a simple inner product computation, wherein the inner product is averaged over a vector comprising 20 values.

X86 Code
(Simple inner product)

1   Mov ECX, num_samples;         {Set up parameters for macro}
1   Mov ESI, address_1;
1   Mov EDI, address_2;
1   Mov EAX, 0;                   {Initialize vector indices}
1   Mov EBX, 0;
4   FLdZ;                         {Initialize sum of products}
Again:                            {Update counter}
4   Fld dword ptr [ESI+EAX*4];    {Get vector elements and}
1   Inc EAX;                      {update indices}
4   Fld dword ptr [EDI+EBX*4];
1   Inc EBX;
13  FMulP St(1), St;              {Compute product term}
7   FAddP St(1), St;              {Add term to sum}
1   LOOP Again;                   {Continue if more terms}

As shown, the X86 opcode instructions for a simple inner product comprise a plurality of move instructions followed by an F-load function, wherein this sequence is repeated a plurality of times. If this X86 opcode sequence were executed by the X86 core 212, the execution time for this inner product computation would require 709 cycles (9 + 20 X 35). This assumes i486 timing, concurrent execution of floating point operations, and

cache hits for all instructions and data required for the inner product computation. The function preprocessor 204 analyzes the sequence of opcodes and detects that the opcodes are performing an inner product computation. The function preprocessor 204 then converts this entire sequence of X86 opcodes into a single macro or function identifier and one or more parameters. An example macro or function identifier that is created based on the X86 opcode sequence shown above would be as follows.

Example Macro (as it appears in assembler)

Inner_product_simple (address_1,     {Data vector}
                      address_2,     {Data vector}
                      num_samples);  {Length of vector}

This function identifier and one or more parameters are provided to the DSP core 214. The DSP core 214 uses the macro provided from the function preprocessor 204 to load one or more DSP opcodes or instructions which execute the DSP function. In the preferred embodiment, the DSP core 214 uses the macro to index into a

ROM which contains the instructions used for executing the DSP function. In this example, the DSP code or instructions executed by the DSP core 214 in response to receiving the macro described above are shown below.

DSP Code

(Simple inner product)

1   Cntr = num_samples;       {Set up parameters from macro}
1   ptr1 = address_1;
1   ptr2 = address_2;
1   MAC = 0;                  {Initialize sum of products}
1   reg1 = *ptr1++;           {Pre-load multiplier input registers}
    reg2 = *ptr2++;
1   Do LOOP until ce;         {Specify loop parameters}
1   MAC += reg1*reg2;         {Form sum of products}
    reg1 = *ptr1++;
    reg2 = *ptr2++;
LOOP                          {Continue if more terms}

In this example, the DSP core 214 performs this inner product averaged over a vector comprising 20 values and consumes a total of 26 cycles (6 + 20 X 1). This assumes typical DSP timing, including single cycle operation of instructions, zero overhead looping, and cache hits for all instructions and data. Thus, the DSP core 214 provides a performance increase of over 27 times that where the X86 core 212 executes this DSP function.
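The cycle arithmetic of this example can be checked directly. The sketch below simply encodes the patent's stated timing assumptions (i486 timing for the X86 core, single-cycle DSP instructions); the function names are illustrative:

```python
# Cycle-count arithmetic from the inner-product example, per the stated
# timing assumptions.
def x86_cycles(num_samples):
    return 9 + num_samples * 35   # 9 setup cycles + 35-cycle loop body

def dsp_cycles(num_samples):
    return 6 + num_samples * 1    # 6 setup cycles + 1 cycle per MAC iteration

speedup = x86_cycles(20) / dsp_cycles(20)   # 709 / 26
```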

Figure 4 - CPU Block Diagram

Referring now to Figure 4, a more detailed block diagram is shown illustrating the internal components of the CPU 102 according to the present invention. Elements in the CPU 102 that are not necessary for an understanding of the present invention are not described for simplicity. As shown, in the preferred embodiment the CPU 102 includes a bus interface unit 440, instruction cache 202, a data cache 444, an instruction decode unit 402, a plurality of execute units 448, a load/store unit 450, a reorder buffer 452, a register file 454, and a DSP unit 214. As shown, the CPU 102 includes a bus interface unit 440 which includes circuitry for performing communication upon CPU bus 104. The bus interface unit 440 interfaces to the data cache 444 and the instruction cache 202. The instruction cache 202 prefetches instructions from the system memory 110 and stores the

instructions for use by the CPU 102. The instruction decode unit 402 is coupled to the instruction cache 202 and receives instructions from the instruction cache 202. The instruction decode unit 402 includes function preprocessor 204, as shown. The function preprocessor 204 in the instruction decode unit 402 is coupled to the instruction cache 202. The instruction decode unit 402 further includes an instruction alignment unit as well as other logic.

The instruction decode unit 402 couples to a plurality of execute units 448, reorder buffer 452, and load/store unit 450. The plurality of execute units are collectively referred to herein as execute units 448. Reorder buffer 452, execute units 448, and load/store unit 450 are each coupled to a forwarding bus 458 for forwarding of execution results. Load/store unit 450 is coupled to data cache 444. DSP unit 214 is coupled directly to the instruction decode unit 402 through the DSP dispatch bus 456. It is noted that one or more DSP units 214 may be coupled to the instruction decode unit 402.

Bus interface unit 440 is configured to effect communication between microprocessor 102 and devices coupled to system bus 104. For example, instruction fetches which miss instruction cache 202 are transferred from main memory 110 by bus interface unit 440. Similarly, data requests performed by load/store unit 450 which miss data cache 444 are transferred from main memory 110 by bus interface unit 440. Additionally, data cache 444 may discard a cache line of data which has been modified by microprocessor 102. Bus interface unit 440 transfers the modified line to main memory 110.

Instruction cache 202 is preferably a high speed cache memory for storing instructions. It is noted that instruction cache 202 may be configured into a set-associative or direct mapped configuration. Instruction cache 202 may additionally include a branch prediction mechanism for predicting branch instructions as either taken or not taken. A "taken" branch instruction causes instruction fetch and execution to continue at the target address of the branch instruction. A "not taken" branch instruction causes instruction fetch and execution to continue at the instruction subsequent to the branch instruction. Instructions are fetched from instruction cache 202 and conveyed to instruction decode unit 402 for decode and dispatch to an execution unit. The instruction cache 202 may also include a macro prediction mechanism for predicting macro instructions and taking the appropriate action.

Instruction decode unit 402 decodes instructions received from the instruction cache 202 and provides the decoded instructions to the execute units 448, the load/store unit 450, or the DSP unit 214. The instruction decode unit 402 is preferably configured to dispatch an instruction to more than one execute unit 448.

The instruction decode unit 402 includes function preprocessor 204. According to the present invention, the function preprocessor 204 in the instruction decode unit 402 is configured to detect X86 instruction sequences in the instruction cache 202 which correspond to or perform DSP functions. If such an instruction sequence is detected, the function preprocessor 204 generates a corresponding macro and parameters and transmits the corresponding DSP macro and parameters to the DSP unit 214 upon DSP dispatch bus 456. The DSP unit 214 receives the DSP function macro and parameter information from the instruction decode unit 402 and performs the indicated DSP function. Additionally, DSP unit 214 is preferably configured to access data cache 444 for data operands. Data operands may be stored in a memory within DSP unit 214 for quicker access, or may be accessed directly from data cache 444 when needed. Function preprocessor 204 provides feedback to instruction cache 202 to ensure that sufficient look ahead instructions are available for macro searching.

If the X86 instructions in the instruction cache 202 are not intended to perform a DSP function, the instruction decode unit 402 decodes the instructions fetched from instruction cache 202 and dispatches the instructions to execute units 448 and/or load/store unit 450. Instruction decode unit 402 also detects the register

operands used by the instruction and requests these operands from reorder buffer 452 and register file 454. Execute units 448 execute the X86 instructions as is known in the art.

Also, if the DSP unit 214 is not included in the CPU 102 or is disabled through software, instruction decode unit 402 dispatches all X86 instructions to execute units 448. Execute units 448 execute the X86 instructions as in the prior art. In this manner, if the DSP unit 214 is disabled, the X86 code, including the instructions which perform DSP functions, is executed by the X86 core, as is currently done in prior art X86 microprocessors. Thus, if the DSP unit 214 is disabled, the program executes correctly, even though operation is less efficient than the execution of a corresponding routine in the DSP 214. Advantageously, the enabling or disabling, or the presence or absence, of the DSP core 214 in the CPU 102 does not affect the correct operation of the program. In one embodiment, execute units 448 are symmetrical execution units that are each configured to execute the instruction set employed by microprocessor 102. In another embodiment, execute units 448 are asymmetrical execution units configured to execute dissimilar instruction subsets. For example, execute units 448 may include a branch execute unit for executing branch instructions, one or more arithmetic/logic units for executing arithmetic and logical instructions, and one or more floating point units for executing floating point instructions. Instruction decode unit 402 dispatches an instruction to an execute unit 448 or load/store unit 450 which is configured to execute that instruction.

Load/store unit 450 provides an interface between execute units 448 and data cache 444. Load and store memory operations are performed by load/store unit 450 to data cache 444. Additionally, memory dependencies between load and store memory operations are detected and handled by load/store unit 450. Execute units 448 and load/store unit(s) 450 may include one or more reservation stations for storing instructions whose operands have not yet been provided. An instruction is selected from those stored in the reservation stations for execution if (1) the operands of the instruction have been provided, and (2) the instructions which are prior to the instruction being selected have not yet received operands. It is noted that a centralized reservation station may be included instead of separate reservation stations. The centralized reservation station is coupled between instruction decode unit 402, execute units 448, and load/store unit 450. Such an embodiment may perform the dispatch function within the centralized reservation station.
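The stated reservation-station selection rule can be sketched as follows. The list-based station representation is an illustrative assumption, not the hardware structure itself:

```python
# Illustrative sketch of the selection rule: an instruction is selected when
# (1) its operands have been provided, and (2) all prior instructions in the
# station are still waiting on operands.
def select_for_execution(station):
    """station: list of (instruction, operands_ready) in program order."""
    for i, (insn, ready) in enumerate(station):
        if ready and all(not prior_ready for _, prior_ready in station[:i]):
            return insn
    return None   # nothing eligible this cycle
```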

CPU 102 preferably supports out of order execution and employs reorder buffer 452 for storing execution results of speculatively executed instructions and storing these results into register file 454 in program order, for performing dependency checking and register renaming, and for providing for mispredicted branch and exception recovery. When an instruction is decoded by instruction decode unit 402, requests for register operands are conveyed to reorder buffer 452 and register file 454. In response to the register operand requests, one of three values is transferred to the execute unit 448 and/or load/store unit 450 which receives the instruction: (1) the value stored in reorder buffer 452, if the value has been speculatively generated; (2) a tag identifying a location within reorder buffer 452 which will store the result, if the value has not been speculatively generated; or (3) the value stored in the register within register file 454, if no instructions within reorder buffer 452 modify the register. Additionally, a storage location within reorder buffer 452 is allocated for storing the results of the instruction being decoded by instruction decode unit 402. The storage location is identified by a tag, which is conveyed to the unit receiving the instruction. It is noted that, if more than one reorder buffer storage location is allocated for storing results corresponding to a particular register, the value or tag corresponding to the last result in program order is conveyed in response to a register operand request for that particular register.
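The three-way operand response described above can be sketched in Python. The reorder buffer is modeled here as a dict mapping register name to a list of (tag, value-or-None) entries in program order; this representation is an illustrative assumption:

```python
# Sketch of the three-way register operand response.
def operand_response(reg, reorder_buffer, register_file):
    entries = reorder_buffer.get(reg, [])
    if entries:
        tag, value = entries[-1]          # last result in program order
        if value is not None:
            return ("value", value)       # (1) speculatively generated value
        return ("tag", tag)               # (2) tag of the pending location
    return ("value", register_file[reg])  # (3) committed register-file value
```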

When execute units 448 or load/store unit 450 execute an instruction, the tag assigned to the instruction by reorder buffer 452 is conveyed upon result bus 458 along with the result of the instruction. Reorder buffer 452 stores the result in the indicated storage location. Additionally, execute units 448 and load/store unit 450 compare the tags conveyed upon result bus 458 with tags of operands for instructions stored therein. If a match occurs, the unit captures the result from result bus 458 and stores it with the corresponding instruction. In this manner, an instruction may receive the operands it is intended to operate upon. Capturing results from result bus 458 for use by instructions is referred to as "result forwarding".
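Result forwarding can be sketched as a tag-matched capture of the broadcast value. The dict-based operand representation is illustrative:

```python
# Sketch of "result forwarding": when a (tag, result) pair appears on the
# result bus, every waiting operand with a matching tag captures the value.
def forward_result(bus_tag, result, waiting_operands):
    for operand in waiting_operands:
        if operand["value"] is None and operand["tag"] == bus_tag:
            operand["value"] = result     # capture from the result bus
    return waiting_operands
```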

Instruction results are stored into register file 454 by reorder buffer 452 in program order. Storing the results of an instruction and deleting the instruction from reorder buffer 452 is referred to as "retiring" the instruction. By retiring the instructions in program order, recovery from incorrect speculative execution may be performed. For example, if an instruction is subsequent to a branch instruction whose taken/not taken prediction is incorrect, then the instruction may be executed incorrectly. When a mispredicted branch instruction or an instruction which causes an exception is detected, reorder buffer 452 discards the instructions subsequent to the mispredicted branch instruction. Instructions thus discarded are also flushed from execute units 448, load/store unit 450, and instruction decode unit 402.

Register file 454 includes storage locations for each register defined by the microprocessor architecture employed by microprocessor 102. For example, in the preferred embodiment where the CPU 102 includes an x86 microprocessor architecture, the register file 454 includes locations for storing the EAX, EBX, ECX, EDX, ESI, EDI, ESP, and EBP register values. Data cache 444 is a high speed cache memory configured to store data to be operated upon by microprocessor 102. It is noted that data cache 444 may be configured into a set-associative or direct-mapped configuration.

For more information regarding the design and operation of an X86 compatible microprocessor, please see co-pending patent application entitled "High Performance Superscalar Microprocessor", Serial No. 08/146,382, filed October 29, 1993 by Witt, et al., and co-pending patent application entitled "Superscalar Microprocessor Including a High Performance Instruction Alignment Unit", Serial No. 08/377,843, filed January 25, 1995 by Witt, et al., which are both assigned to the assignee of the present application, and which are both hereby incorporated by reference in their entirety as though fully and completely set forth herein. Please also see "Superscalar Microprocessor Design" by Mike Johnson, Prentice-Hall, Englewood Cliffs, New Jersey, 1991, which is hereby incorporated herein by reference in its entirety.

Figure 5 - Instruction Decode Unit

Referring now to Figure 5, one embodiment of instruction decode unit 402 is shown. Instruction decode unit

402 includes an instruction alignment unit 460, a plurality of decoder circuits 462, and a DSP function preprocessor 204. Instruction alignment unit 460 is coupled to receive instructions fetched from instruction cache 202 and aligns instructions to decoder circuits 462.

Instruction alignment unit 460 routes instructions to decoder circuits 462. In one embodiment, instruction alignment unit 460 includes a byte queue in which instruction bytes fetched from instruction cache 202 are queued.

Instruction alignment unit 460 locates valid instructions from within the byte queue and dispatches the instructions to respective decoder circuits 462. In another embodiment, instruction cache 202 includes predecode circuitry which predecodes instruction bytes as they are stored into instruction cache 202. Start and end byte information

indicative of the beginning and end of instructions is generated and stored within instruction cache 202. The predecode data is transferred to instruction alignment unit 460 along with the instructions, and instruction alignment unit 460 transfers instructions to the decoder circuits 462 according to the predecode information.

The function preprocessor 204 is also coupled to the instruction cache 202 and operates to detect instruction sequences in the instruction cache 202 which perform DSP functions. Decoder circuits 462 and function preprocessor 204 receive X86 instructions from the instruction alignment unit 460. The function preprocessor 204 provides an instruction disable signal upon a DSP bus to each of the decoder units 462.

Each decoder circuit 462 decodes the instruction received from instruction alignment unit 460 to determine the register operands manipulated by the instruction as well as the unit to receive the instruction. An indication of the unit to receive the instruction, as well as the instruction itself, is conveyed upon a plurality of dispatch buses 468 to execute units 448 and load/store unit 450. Other buses, not shown, are used to request register operands from reorder buffer 452 and register file 454.

The function preprocessor 204 analyzes streams or sequences of X86 instructions from the instruction cache 202 and determines if a DSP function is being executed. If so, the function preprocessor 204 maps the X86 instruction stream to a DSP macro and zero or more parameters and provides this information to one of the one or more DSP units 214. In one embodiment, when the respective instruction sequence reaches the decoder circuits 462, the function preprocessor 204 asserts a disable signal to each of the decoders 462 to disable operation of the decoders 462 for the detected instruction sequence. When a decoder circuit 462 detects the disable signal from function preprocessor 204, the decoder circuit 462 discontinues decoding operations until the disable signal is released. After the instruction sequence corresponding to the DSP function has exited the instruction cache 202, the function preprocessor 204 removes the disable signal to each of the decoders 462. In other words, once the function preprocessor 204 detects the end of the X86 instruction sequence, the function preprocessor 204 removes the disable signal to each of the decoders 462, and the decoders resume operation.
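The disable-signal behavior described above can be sketched as follows. Modeling the detected DSP sequence as a set of instruction indices is an illustrative assumption about representation, not the hardware mechanism:

```python
# Sketch of the disable signal: while the function preprocessor asserts
# disable for a detected DSP sequence, the decoder circuits skip those
# instructions (the sequence is handled as a macro by the DSP unit instead).
def decode_stream(instructions, dsp_sequence_indices):
    decoded = []
    for i, insn in enumerate(instructions):
        disable = i in dsp_sequence_indices   # signal from preprocessor 204
        if not disable:
            decoded.append(insn)              # normal X86 decode path
    return decoded
```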

Each of decoder circuits 462 is configured to convey an instruction upon one of dispatch buses 468, along with an indication of the unit or units to receive the instruction. In one embodiment, a bit is included within the indication for each of execute units 448 and load/store unit 450. If a particular bit is set, the corresponding unit is to execute the instruction. If a particular instruction is to be executed by more than one unit, more than one bit in the indication may be set.

Function Preprocessor

Referring now to Figure 6, a block diagram of the function preprocessor 204 is shown according to one embodiment of the invention. As shown, in this embodiment the function preprocessor 204 comprises a scan-ahead circuit 502 for examining or scanning sequences of instructions in the instruction memory or instruction cache 202. In one embodiment, the scan-ahead circuit or means 502 examines sequences of instructions stored in the instruction memory 202 prior to operation of the instruction decoder 402 in decoding the instructions comprising the respective sequence of instructions being scanned. Thus, the scan-ahead circuit 502 looks ahead at instruction sequences in the instruction cache 202 before the respective instructions are provided to the instruction decoder 402.

The function preprocessor 204 further comprises an instruction sequence determination circuit 504 for determining whether a sequence of instructions in the instruction memory 202 implements a digital signal processing function. This determination can be performed in various ways, as described further below.

The function preprocessor 204 further comprises a conversion / mapping circuit 506 for converting a sequence of instructions in the instruction memory 202 which implements a digital signal processing function into a digital signal processing function identifier or macro identifier and zero or more parameters. Thus, if the instruction sequence determination circuit 504 determines that a sequence of instructions in the instruction memory 202 implements an FFT function, the conversion / mapping circuit 506 converts this sequence of instructions into an FFT function identifier and zero or more parameters.

Figure 7 - Pattern Recognition Circuit

Referring now to Figure 7, in one embodiment the function preprocessor 204 includes a pattern recognition circuit or pattern recognition detector 512 which determines whether a sequence of instructions in the instruction memory 202 implements a digital signal processing function. The pattern recognition circuit 512 stores a plurality of patterns of instruction sequences which implement digital signal processing functions. The pattern recognition circuit 512 stores bit patterns which correspond to opcode sequences of machine language instructions which perform DSP functions, such as FFTs, inner products, matrix manipulation, correlation, convolution, etc. The pattern recognition detector 512 examines a sequence of instructions stored in the instruction memory

202 and compares the sequence of instructions with the plurality of stored patterns. Operation of the pattern recognition detector 512 is shown in Figure 8. In one embodiment, the pattern recognition detector 512 compares each of the patterns with an instruction sequence at periodic locations in the instruction sequence. Alternatively, the pattern recognition detector 512 compares each of the patterns with an instruction sequence at predefined locations in the instruction sequence. The pattern recognition detector 512 may include a look-up table as the unit which performs the pattern comparisons, as desired. The pattern recognition detector 512 may also perform macro prediction on instruction sequences to improve performance.

The pattern recognition detector 512 determines whether the sequence of instructions in the instruction memory 202 substantially matches one of the plurality of stored patterns. A substantial match indicates that the sequence of instructions implements a digital signal processing function. In the preferred embodiment, a substantial match occurs where the instruction sequence matches a stored pattern by greater than 90%. Other matching thresholds, such as 95% or 100%, may be used, as desired. If a match occurs, the pattern recognition detector 512 determines the type of DSP function pattern which matched the sequence of instructions and passes this DSP function type to the conversion / mapping circuit 506.
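The "substantial match" test can be sketched as an element-wise comparison against a stored pattern with the preferred 90% threshold. The equal-length, position-wise comparison is an illustrative assumption about how the comparison is scored:

```python
# Sketch of the substantial-match test with the preferred >90% threshold.
def substantial_match(sequence, pattern, threshold=0.90):
    if len(sequence) != len(pattern):
        return False
    hits = sum(a == b for a, b in zip(sequence, pattern))
    return hits / len(pattern) > threshold   # "greater than 90%"
```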

Figure 9 - Look-up Table

Referring now to Figure 9, in another embodiment the function preprocessor 204 includes a look-up table 514 which determines whether a sequence of instructions in the instruction memory 202 implements a digital signal processing function. In this embodiment, the look-up table 514 may be in addition to, or instead of, the pattern recognition detector 512.

In an embodiment where the function preprocessor 204 includes only the look-up table 514, the look-up table 514 stores a plurality of patterns, wherein each of the patterns is at least a subset of an instruction sequence which implements a digital signal processing function. Thus, this embodiment is similar to the embodiment of Figure 6 described above, except that the function preprocessor 204 includes the look-up table 514 instead of the pattern recognition detector 512 for detecting instruction sequences which implement DSP functions. In addition, in this embodiment the look-up table 514 stores smaller patterns which correspond to smaller sequences of

instructions, i.e., subsets of instruction sequences which implement DSP functionality. In this embodiment, the look-up table 514 requires an exact match with a corresponding sequence of instructions. If an exact match does not occur, then the sequence of instructions is passed to the one or more general purpose execution units, i.e., the general purpose CPU core, for execution. Figure 10 illustrates operation of the look-up table 514 in this embodiment. As shown, a sequence of instructions in the instruction cache 202 is temporarily stored in the instruction latch 542. The contents of the instruction latch 542 are then compared with each of the entries in the look-up table 514 by element 546. If the contents of the instruction latch 542 exactly match one of the entries in the look-up table 514, then the DSP function or instruction 548 which corresponds to this entry is provided to the DSP execution unit 214. In the above embodiments of Figures 7 and 9, the pattern recognition detector 512 and/or the look-up table

514 are configured to determine that an instruction sequence implements a DSP function only when the determination can be made with relative certainty. This is because a "missed" instruction sequence, i.e., an instruction sequence which implements a DSP function but which was not detected as implementing a DSP function, will not affect operation of the CPU 102, since the general purpose core or execution units can execute the instruction sequence. However, an instruction sequence which does not implement a DSP function that is mis-identified as a sequence which does implement a DSP function is more problematic, and could result in possible erroneous operation. Thus, it is anticipated that the pattern recognition detector 512 or the look-up table 514 may not accurately detect every instruction sequence which implements a DSP function. In this instance, the instruction sequence is passed on to one of the general purpose execution units, as occurs in the prior art.
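The exact-match look-up path of Figures 9 and 10 can be sketched as a dictionary probe on the latched sequence. The names and return convention are illustrative assumptions:

```python
# Sketch of the exact-match look-up path: the latched instruction sequence
# must match a table entry exactly (element 546 comparison); otherwise the
# sequence falls through to the general purpose execution units.
def lookup_dispatch(instruction_latch, lookup_table):
    key = tuple(instruction_latch)
    if key in lookup_table:
        return ("dsp_unit", lookup_table[key])       # DSP function 548
    return ("general_purpose_core", instruction_latch)
```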

Figure 11 - Pattern Recognition Circuit with Look-up Table

Referring now to Figure 11, in another embodiment the function preprocessor 204 includes both the look-up table 514 and the pattern recognition detector 512. In this embodiment, the function preprocessor 204 uses each of the look-up table 514 and the pattern recognition detector 512 to determine whether a sequence of instructions in the instruction memory 202 implements a digital signal processing function. This embodiment preferably uses a two stage analysis of a sequence of X86 instructions, whereby the look-up table 514 first determines if the sequence likely implements a DSP function, and then the pattern recognition detector 512 determines the type of DSP function being implemented. Alternatively, the pattern recognition detector 512 first determines if the sequence likely implements a DSP function, and then the look-up table 514 determines the type of DSP function being implemented.

In this embodiment, the look-up table 514 stores small patterns which correspond to atomic DSP instructions. For example, the look-up table 514 stores a pattern of X86 instructions which perform a multiply accumulate add function, which is common in DSP architectures. The look-up table 514 also stores other patterns which implement atomic DSP instructions. The pattern recognition detector 512 stores patterns corresponding to entire DSP functions, such as an FFT, a correlation, and a convolution, among others.

First, the look-up table 514 compares each entry with incoming instruction sequences and stores the number of "hits" or matches for a sequence. If the number of matches is greater than a certain defined threshold, then the sequence includes a number of DSP-type "instructions" and thus is presumed to implement a DSP function. In this instance, the pattern recognition detector 512 is enabled to compare the entire sequence with each of the stored patterns to determine the type of DSP function being implemented by the X86 instruction sequence.

As mentioned above, the pattern recognition detector 512 determines if the instruction sequence substantially matches one of the stored patterns.
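The two stage analysis can be sketched as follows. Using length-2 opcode windows for the atomic idioms, and showing the second stage as an exact dictionary match, are simplifying assumptions for illustration; the patent's second stage uses substantial matching:

```python
# Sketch of the two-stage analysis: stage 1 counts look-up-table hits on
# small atomic-DSP-idiom patterns; stage 2, enabled only when the hit count
# exceeds the threshold, identifies the full DSP function.
def two_stage_detect(sequence, atomic_patterns, full_patterns, hit_threshold=2):
    seq = tuple(sequence)
    hits = sum(1 for i in range(len(seq) - 1)
               if seq[i:i + 2] in atomic_patterns)   # stage 1: LUT "hits"
    if hits <= hit_threshold:
        return None                      # presumed not a DSP function
    return full_patterns.get(seq)        # stage 2: identify the function type
```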

Conclusion

Therefore, the present invention comprises a novel CPU or microprocessor architecture which optimizes execution of DSP and/or mathematical operations while maintaining backwards compatibility with existing software.

Although the system and method of the present invention has been described in connection with the preferred embodiment, it is not intended to be limited to the specific form set forth herein, but on the contrary, it is intended to cover such alternatives, modifications, and equivalents as can be reasonably included within the spirit and scope of the invention as defined by the appended claims.