Title:
VARIATION TOLERANT LATCH-BASED CLOCKING
Document Type and Number:
WIPO Patent Application WO/2022/098753
Kind Code:
A1
Abstract:
A clock pulse suppression logic generates local non-overlapping 3-phase clocks from a globally forwarded clock topology. The digest data path uses a globally forwarded clock topology in which the 2x frequency clock is forwarded unidirectionally from one pipeline stage to the next. Each pipeline stage implements all-digital pulse suppression logic that uses a daisy-chained enable signal to locally generate the 3-phase clocks. An optimized data path is used for both the digest and the scheduler, where the output inversion at each Boolean function and carry-save adder tree is removed, resulting in inverted outputs at each stage. The functional inversions are corrected or accounted for in the subsequent stages of the data path, generating the expected final hash output.

Inventors:
SURESH VIKRAM (US)
RAKHA RAJU (US)
MATHEW SANU (US)
KUMAR RAGHAVAN (US)
RAJAGOPALAN SRINIVASAN (US)
Application Number:
PCT/US2021/057890
Publication Date:
May 12, 2022
Filing Date:
November 03, 2021
Assignee:
INTEL CORP (US)
International Classes:
H03K19/17728; G06Q20/06; H03K19/17736; H04L9/06
Foreign References:
US20180004242A12018-01-04
US20180097615A12018-04-05
CN105226986A2016-01-06
US20040021482A12004-02-05
US20200201679A12020-06-25
Attorney, Agent or Firm:
MOORE, Michael S. et al. (US)
Claims:
CLAIMS

What is claimed is:

1. An apparatus comprising: a processor core; and a hardware accelerator coupled to the processor core, wherein the hardware accelerator includes: a plurality of latch stages; a plurality of message digest datapath rounds, wherein an individual message digest datapath round is coupled between two latch stages of the plurality of latch stages; and a plurality of clock generation logic circuits, wherein an individual clock generation logic circuit is coupled to an individual latch stage of the plurality of latch stages.

2. The apparatus of claim 1, wherein the plurality of clock generation logic circuits is to generate non-overlapping clocks including a first clock, a second clock, and a third clock.

3. The apparatus of claim 2, wherein an individual clock generation logic circuit generates one of the first, second, or third clocks.

4. The apparatus of claim 2, wherein an individual clock generation logic circuit receives an enable from another individual clock generation logic circuit of the plurality of clock generation logic circuits.

5. The apparatus of claim 2, wherein the individual clock generation logic circuit includes: a sequential unit having a data input and a clock input, and a first output; and an AND gate to receive the first output and the clock input, and to generate a second output which is one of the first clock, the second clock, or the third clock.


6. The apparatus of claim 5, wherein: the data input is to receive an enable; the clock input is to receive a clock having twice the frequency of the first clock, the second clock, or the third clock; and the first output is an enable output for another individual clock generation logic circuit.

7. The apparatus of claim 2, wherein the plurality of message digest datapath rounds includes a first round and a second round, wherein the plurality of latch stages includes: a first latch clocked by the first clock, wherein an output of the first latch is coupled to an input of the first round; a second latch having an input coupled to an output of the first round, wherein the second latch is clocked by the second clock; and a third latch having an input coupled to an output of the second round, wherein the third latch is clocked by the third clock.

8. The apparatus of any of claims 1-7, wherein the individual message digest datapath round includes a computational block to: precompute a first summation of a 32-bit message (Wi), a 32-bit round constant (Ki), and a content of a first shifted state register (Gi-1); and store a result of the first summation in a state register (Hi).

9. The apparatus of claim 8, wherein the computational block is further to: compute a complement of a content of a second shifted state register (Di-1); compute a second summation of the complement, a content of a second state register (Ei), and a computed value; and store a result of the second summation in a state register (Ai).

10. The apparatus of any of claims 1-7, wherein the hardware accelerator is to mine digital currency.

11. The apparatus of claim 10, wherein the digital currency is a Bitcoin.


12. The apparatus of claim 1, wherein an enable from each of the plurality of clock generation logic circuits daisy chains among the plurality of clock generation logic circuits.

13. An apparatus comprising: a processor core; and a hardware accelerator coupled to the processor core, wherein the hardware accelerator includes a data path where output inversion at a Boolean function and an output inversion at a carry-save adder tree are removed, resulting in inverted outputs at each stage.

14. The apparatus of claim 13 wherein the hardware accelerator further includes: a plurality of latch stages; a plurality of message digest datapath rounds, wherein an individual message digest datapath round is coupled between two latch stages of the plurality of latch stages; and a plurality of clock generation logic circuits, wherein an individual clock generation logic circuit is coupled to an individual latch stage of the plurality of latch stages.

15. The apparatus of claim 14, wherein the plurality of clock generation logic circuits is to generate non-overlapping clocks including a first clock, a second clock, and a third clock.

16. The apparatus of claim 15, wherein an individual clock generation logic circuit generates one of the first, second, or third clocks.

17. The apparatus of claim 15, wherein an individual clock generation logic circuit receives an enable from another individual clock generation logic circuit of the plurality of clock generation logic circuits.

18. The apparatus of claim 15, wherein the individual clock generation logic circuit includes: a sequential unit having a data input and a clock input, and a first output; and an AND gate to receive the first output and the clock input, and to generate a second output which is one of the first clock, the second clock, or the third clock.

19. The apparatus of any of claims 13-18, wherein the hardware accelerator is to mine digital currency.

20. The apparatus of claim 19, wherein the digital currency is Bitcoin.

Description:
VARIATION TOLERANT LATCH-BASED CLOCKING

RELATED APPLICATION

This application claims priority to US Provisional Patent Application Number 63/110,274 titled “VARIATION TOLERANT LATCH-BASED CLOCKING, AND ENERGY EFFICIENT BITCOIN MINING ACCELERATOR” filed November 5, 2020, the contents of which are incorporated by reference in their entirety.

BACKGROUND

Digital currency is an internet-based medium of exchange. Digital currency may be based on exchange rates for physical currency (e.g., the United States Dollar). Various types of digital currency exist, and may be used to buy physical goods and services from retailers that have agreed to accept the type of digital currency offered.

Bitcoin is the most popular type of (e.g., unit of) digital currency used in the digital currency eco-system. The Bitcoin transactional system is peer-to-peer, meaning transactions take place between users directly, without an intermediary (e.g., without involving a bank). Peer-to-peer Bitcoin transactions may be verified by network nodes and recorded in a public distributed ledger called a blockchain, which uses Bitcoin as its unit of accounting.

As opposed to physical currency systems backed by natural resources (e.g., gold), Bitcoins may be created by using software and hardware systems to solve a series of mathematical algorithms (e.g., Secure Hash Algorithm 256 (SHA-256)). The Bitcoin system solves the critical issue of “double spending” using the concept of block chaining, where a public ledger captures all the transactions that occur in the digital currency system. Every block added to the chain validates a new set of transactions by compressing the Merkle root of the transactions along with the time stamp, version, target, and the hash of the previous block. The process of validating transactions and computing new blocks of the chain is known as mining. The most expensive operation in mining involves the computationally intensive task of finding a 32-bit nonce, which when appended to the Merkle root, previous hash and other headers, produces a 256-bit hash value which is less than a pre-defined threshold value. This hashing operation is the largest recurring cost a miner incurs in the process of creating a Bitcoin and therefore there is a strong motivation to reduce the energy consumption of this process.

When the Bitcoin mining algorithms are solved in a way that satisfies certain predefined conditions, a new block is added to the blockchain and a certain number of Bitcoins are awarded to the miner; thereby introducing new Bitcoins into the eco-system. Bitcoin mining algorithms are inherently difficult to solve, and thus require large amounts of processing power. Because of the large amount of power utilized, and the relatively high cost of that power, mining Bitcoins can be a very costly endeavor.

BRIEF DESCRIPTION OF THE DRAWINGS

The embodiments of the disclosure will be understood more fully from the detailed description given below and from the accompanying drawings of various embodiments of the disclosure, which, however, should not be taken to limit the disclosure to the specific embodiments, but are for explanation and understanding only.

Fig. 1A is a block diagram illustrating a computing system that implements a hardware accelerator according to one embodiment.

Fig. 1B illustrates a high-level bitcoin mining accelerator organization, in accordance with some embodiments.

Fig. 2 illustrates a conventional 3-phase latch-based clocking apparatus.

Fig. 3 illustrates 3-phase latch-based clocking scheme for capturing and launching data.

Fig. 4 illustrates a circuit for generating the 3-phase clocks.

Fig. 5 illustrates a scheme for distribution of the 3-phase clock.

Fig. 6 illustrates a scheme with local clock generation using global forwarded clock with local suppression, in accordance with some embodiments.

Fig. 7 illustrates a local clock-pulse suppression logic, in accordance with some embodiments.

Fig. 8 illustrates variation tolerant latch-based clocking apparatus, in accordance with some embodiments.

Fig. 9 illustrates an optimized performance critical message digest data path, in accordance with some embodiments.

Fig. 10A illustrates a pass-gate implementation of functions Σ0, Σ1 that performs a 3-way XOR operation on rotated versions of A and E, respectively.

Fig. 10B illustrates a majority gate.

Fig. 10C illustrates a change (Ch) or multiplexer function.

Fig. 11 illustrates an inverted version of a carry-save adder.

Fig. 12 illustrates an optimized data path using inverted logic, in accordance with some embodiments.

Fig. 13 illustrates a message schedule data path.

Fig. 14 illustrates an optimized message scheduler data path with logic inversion, in accordance with some embodiments.

Fig. 15 illustrates an inverted XNOR gate, in accordance with some embodiments.

Fig. 16 illustrates a smart device or a computer system or a SoC (System-on-Chip) including variation tolerant latch-based clocking apparatus and/or energy efficient bitcoin mining accelerator, in accordance with some embodiments.

DETAILED DESCRIPTION

The Bitcoin mining operation may include two stages of Secure Hash Algorithm 256 (SHA-256) hashing to compress a 1024-bit message, followed by another round of SHA-256 hashing of the intermediate hash. The 1024-bit message may include a 32-bit nonce that may be incremented every cycle. A valid nonce may be found if the final hash is less than a predefined threshold value. This validity may be verified by checking if the final hash contains a predefined number of leading zeros. One challenge for miners may therefore be searching through the entire nonce space in a brute force manner while minimizing energy consumption per hash and maximizing performance per watt.
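As a point of reference, the validity check described above can be sketched in a few lines of Python. This is an illustrative software model only, not the accelerator's hardware data path; the helper names, the dummy header, and the leading-zero count are placeholders.

```python
import hashlib

def double_sha256(data: bytes) -> bytes:
    # Two passes of SHA-256, as used for the Bitcoin block header.
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def is_valid_hash(header: bytes, leading_zero_bits: int) -> bool:
    # Requiring a minimum number of leading zero bits is equivalent to
    # requiring the hash to be below a pre-defined threshold value.
    digest = double_sha256(header)
    value = int.from_bytes(digest, "big")
    return value < (1 << (256 - leading_zero_bits))

# Example with a dummy 80-byte header and a (very easy) 8-bit requirement.
print(is_valid_hash(b"\x00" * 80, 8))
```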

The most expensive operation in mining may involve the computationally intensive task of finding the 32-bit nonce (e.g., a 32-bit (4-byte) field whose value is set so that the hash of the block will contain a run of zeros), which when appended to the Merkle root (e.g., a hash of the transaction hashes in the blockchain), previous hash and other headers, produces a 256-bit hash value, which is less than a pre-defined threshold value. A typical SHA-256 datapath may include two major computational blocks - a message digest and a message scheduler with SHA-256 specific functions that combine multiple 32-bit words followed by 32-bit additions. The performance of the fully unrolled datapath may be limited by these two datapaths. This hashing operation may be the largest recurring cost a miner incurs in the process of creating a Bitcoin and therefore there is a strong motivation to reduce the energy consumption of this process.

The SHA-256 datapath used for proof-of-work in Bitcoin mining may be sequential-cell dominated, resulting in significant area and clock power. To enable energy-efficient mining, conventional designs use a 3-phase latch-based clocking technique with each pipeline stage latch enabled by one of the three non-overlapping clocks. One of the advantages of the 3-phase non-overlapping clocks is the inherent 0.25*clk_period min-delay margin. This margin may eliminate the need for min-delay buffer insertion, reducing area and power overhead. However, this margin may also result in a dead time in the gap between two clock phases where no computation takes place in the datapath. This dead time may result in an overall loss in the hash throughput.

Legacy 3-phase clocking schemes may use a global clock generation circuit that generates the 3-phase non-overlapping clocks using an input 2x frequency clock. The three individual clocks are then distributed throughout one or more engines. While global clock generation may use fewer clock generation modules, distributing three separate clocks for a 128-stage fully-unrolled SHA256 data path may incur significant overhead. Since successive pipeline stages in the design are enabled by different clocks, on-chip variations along the clock tree may affect the maximum and minimum delay yields due to the large clock path deviation.

Some embodiments relate to a clock pulse suppression logic to generate a local non-overlapping 3-phase clock from a globally forwarded clock topology. In some embodiments, a globally forwarded clock topology may be used where the 2x frequency clock is forwarded unidirectionally from one pipeline stage to the next. Each pipeline stage implements an all-digital pulse suppression logic using a daisy-chained enable signal to locally generate the 3-phase clocks.

There are many technical effects of the various embodiments. For example, the clocking technique of some embodiments may reduce the impact of on-chip variation (OCV) on the bitcoin mining engine timing by, for example, greater than 90%. This may eliminate (or substantially reduce) the need for overdesigning to compensate for process variations and hence improve the area, performance, and energy-efficiency of a bitcoin mining Application Specific Integrated Circuit (ASIC). Global clock forwarding enables clock and data to be propagated in the same direction, enabling a trade-off between the cycle time available for each pipeline stage and the built-in min-delay margin from the 3-phase clocking. Further, local clock generation reduces the clock path divergence between successive pipeline stages by up to, for example, 24x, reducing the impact of OCV by, for example, about 95%. The clocking technique may reduce or eliminate the need for distributing the three separate clocks throughout the mining engine. Other technical effects will be evident from the various figures and embodiments.

Conventional SHA256 data paths for bitcoin mining implement the digest and scheduler to match the actual functionality per the SHA specification. A direct implementation of the SHA256 functions gives exact outputs at all intermediate Boolean functions and 32-bit modular additions. A CMOS implementation of the data path uses multiple inversions to achieve the exact functions. This results in additional area and power consumption to achieve the inversions.

In some embodiments, an optimized data path may be used for both digest and scheduler, where output inversion at each Boolean function and carry-save adder tree are removed, resulting in inverted outputs at each stage. The functional inversions are corrected or accounted for in the subsequent stages of the data path, generating the expected final hash output.

There are many technical effects of various embodiments. For example, removing functional inversions allows a significant reduction in the number of inverters used in the data path. Since the inversions are effectively corrected or consumed in subsequent stages without adding any additional inverters, the proposed invention results in, for example, approximately 10% energy-efficiency improvement, while also improving data path performance and area.

In the following description, numerous details are discussed to provide a more thorough explanation of embodiments of the present disclosure. It will be apparent, however, to one skilled in the art, that embodiments of the present disclosure may be practiced without these specific details. In other instances, well-known structures and devices are shown in block diagram form, rather than in detail, in order to avoid obscuring embodiments of the present disclosure.

Note that in the corresponding drawings of the embodiments, signals are represented with lines. Some lines may be thicker, to indicate more constituent signal paths, and/or have arrows at one or more ends, to indicate primary information flow direction. Such indications are not intended to be limiting. Rather, the lines are used in connection with one or more exemplary embodiments to facilitate easier understanding of a circuit or a logical unit. Any represented signal, as dictated by design needs or preferences, may actually comprise one or more signals that may travel in either direction and may be implemented with any suitable type of signal scheme.

Throughout the specification, and in the claims, the term "connected" means a direct connection, such as electrical, mechanical, or magnetic connection between the things that are connected, without any intermediary devices. The term "coupled" means a direct or indirect connection, such as a direct electrical, mechanical, or magnetic connection between the things that are connected or an indirect connection, through one or more passive or active intermediary devices.

The term “adjacent” here generally refers to a position of a thing being next to (e.g., immediately next to or close to with one or more things between them) or adjoining another thing (e.g., abutting it).

The term "circuit" or “module” may refer to one or more passive and/or active components that are arranged to cooperate with one another to provide a desired function.

The term "signal" may refer to at least one current signal, voltage signal, magnetic signal, or data/clock signal. The meaning of "a," "an," and "the" include plural references. The meaning of "in" includes "in" and "on."

The term “analog signal” is any continuous signal for which the time varying feature (variable) of the signal is a representation of some other time varying quantity, i.e., analogous to another time varying signal.

The term “digital signal” is a physical signal that is a representation of a sequence of discrete values (a quantified discrete-time signal), for example of an arbitrary bit stream, or of a digitized (sampled and analog-to-digital converted) analog signal.

The term “scaling” generally refers to converting a design (schematic and layout) from one process technology to another process technology and may be subsequently being reduced in layout area. In some cases, scaling also refers to upsizing a design from one process technology to another process technology and may be subsequently increasing layout area. The term “scaling” generally also refers to downsizing or upsizing layout and devices within the same technology node. The term “scaling” may also refer to adjusting (e.g., slowing down or speeding up - i.e. scaling down, or scaling up respectively) of a signal frequency relative to another parameter, for example, power supply level.

The terms “substantially,” “close,” “approximately,” “near,” and “about,” generally refer to being within +/- 10% of a target value.

Unless otherwise specified the use of the ordinal adjectives “first,” “second,” and “third,” etc., to describe a common object, merely indicate that different instances of like objects are being referred to and are not intended to imply that the objects so described must be in a given sequence, either temporally, spatially, in ranking or in any other manner. For the purposes of the present disclosure, phrases “A and/or B” and “A or B” mean (A), (B), or (A and B). For the purposes of the present disclosure, the phrase “A, B, and/or C” means (A), (B), (C), (A and B), (A and C), (B and C), or (A, B and C).

The terms “left,” “right,” “front,” “back,” “top,” “bottom,” “over,” “under,” and the like in the description and in the claims, if any, are used for descriptive purposes and not necessarily for describing permanent relative positions.

It is pointed out that those elements of the figures having the same reference numbers (or names) as the elements of any other figure can operate or function in any manner similar to that described but are not limited to such.

For purposes of the embodiments, the transistors in various circuits and logic blocks described here are metal oxide semiconductor (MOS) transistors or their derivatives, where the MOS transistors include drain, source, gate, and bulk terminals. The transistors and/or the MOS transistor derivatives also include Tri-Gate and FinFET transistors, Gate All Around Cylindrical Transistors, Tunneling FET (TFET), Square Wire, or Rectangular Ribbon Transistors, ferroelectric FET (FeFETs), or other devices implementing transistor functionality like carbon nanotubes or spintronic devices. MOSFET symmetrical source and drain terminals are identical terminals and are used interchangeably here. A TFET device, on the other hand, has asymmetric Source and Drain terminals. Those skilled in the art will appreciate that other transistors, for example, Bi-polar junction transistors (BJT PNP/NPN), BiCMOS, CMOS, etc., may be used without departing from the scope of the disclosure.

Here the term “supervisor” generally refers to a power controller, or power management, unit (a “p-unit”), which monitors and manages power and performance related parameters for one or more associated power domains, either alone or in cooperation with one or more other p-units. Power/performance related parameters may include but are not limited to domain power, platform power, voltage, voltage domain current, die current, load-line, temperature, utilization, clock frequency, processing efficiency, current/future workload information, and other parameters. It may determine new power or performance parameters (limits, average operational, etc.) for the one or more domains. These parameters may then be communicated to supervisee p-units, or directly to controlled or monitored entities such as VR or clock throttle control registers, via one or more fabrics and/or interconnects. A supervisor learns of the workload (present and future) of one or more dies, power measurements of the one or more dies, and other parameters (e.g., platform level power boundaries) and determines new power limits for the one or more dies. These power limits are then communicated by supervisor p-units to the supervisee p-units via one or more fabrics and/or interconnects. In examples where a die has one p-unit, a supervisor (Svor) p-unit is also referred to as supervisor die.

Here the term “supervisee” generally refers to a power controller, or power management, unit (a “p-unit”), which monitors and manages power and performance related parameters for one or more associated power domains, either alone or in cooperation with one or more other p-units and receives instructions from a supervisor to set power and/or performance parameters (e.g., supply voltage, operating frequency, maximum current, throttling threshold, etc.) for its associated power domain. In examples where a die has one p-unit, a supervisee (Svee) p-unit may also be referred to as a supervisee die. Note that a p-unit may serve either as a Svor, a Svee, or both a Svor/Svee p-unit.

Here, the term “processor core” generally refers to an independent execution unit that can run one program thread at a time in parallel with other cores. A processor core may include a dedicated power controller or power control unit (p-unit) which can be dynamically or statically configured as a supervisor or supervisee. This dedicated p-unit is also referred to as an autonomous p-unit, in some examples. In some examples, all processor cores are of the same size and functionality i.e., symmetric cores. However, processor cores can also be asymmetric. For example, some processor cores have different size and/or function than other processor cores. A processor core can be a virtual processor core or a physical processor core.

Here the term “die” generally refers to a single continuous piece of semiconductor material (e.g. silicon) where transistors or other components making up a processor core may reside. Multi-core processors may have two or more processors on a single die, but alternatively, the two or more processors may be provided on two or more respective dies. Each die has a dedicated power controller or power control unit (p-unit) which can be dynamically or statically configured as a supervisor or supervisee. In some examples, dies are of the same size and functionality, i.e., symmetric dies. However, dies can also be asymmetric. For example, some dies have different size and/or function than other dies.

Here, the term “interconnect” refers to a communication link, or channel, between two or more points or nodes. It may comprise one or more separate conduction paths such as wires, vias, waveguides, passive components, and/or active components. It may also comprise a fabric.

Here the term “interface” generally refers to software and/or hardware used to communicate with an interconnect. An interface may include logic and I/O driver/receiver to send and receive data over the interconnect or one or more wires.

Here the term “fabric” generally refers to communication mechanism having a known set of sources, destinations, routing rules, topology and other properties. The sources and destinations may be any type of data handling functional unit such as power management units. Fabrics can be two-dimensional spanning along an x-y plane of a die and/or three-dimensional (3D) spanning along an x-y-z plane of a stack of vertical and horizontally positioned dies. A single fabric may span multiple dies. A fabric can take any topology such as mesh topology, star topology, daisy chain topology. A fabric may be part of a network-on-chip (NoC) with multiple agents. These agents can be any functional unit.

Here the term “dielet” or “chiplet” generally refers to a physically distinct semiconductor die, typically connected to an adjacent die in a way that allows the fabric across a die boundary to function like a single fabric rather than as two distinct fabrics. Thus at least some dies may be dielets. Each dielet may include one or more p-units which can be dynamically or statically configured as a supervisor, supervisee or both.

Here the term “domain” generally refers to a logical or physical perimeter that has similar properties (e.g., supply voltage, operating frequency, type of circuits or logic, and/or workload type) and/or is controlled by a particular agent. For example, a domain may be a group of logic units or function units that are controlled by a particular supervisor. A domain may also be referred to an Autonomous Perimeter (AP). A domain can be an entire system-on-chip (SoC) or part of the SoC, and is governed by a p-unit.

As used herein, the term “glitch” may refer to a signal that is generated based on spurious switching of logic gate outputs when the inputs arrive at different times. For example, if the inputs A and B of an XOR gate switch from 0,0 to 1,1 at different times, the output of that XOR gate may momentarily switch to ‘1’ and then back to ‘0’. This glitch may lead to wasteful power consumption, and in some events may constitute 25% or greater of the total power usage or power loss of a system.

Fig. 1 A is a block diagram illustrating a computing system that implements a hardware accelerator according to one embodiment. The computing system 100 is formed with a processor 110 that includes a memory interface 112. The computing system 100 may be any device or combination of devices, but the description of various embodiments described herein is directed to processing devices and programmable logic devices.

System 100 includes a memory interface 112 and memory 130. In one embodiment, memory interface 112 may be a bus protocol for communication from processor 110 to memory 130. Memory 130 includes a dynamic random-access memory (DRAM) device, a static random access memory (SRAM) device, flash memory device, or other memory device. Memory 130 stores instructions and/or data represented by data signals that are to be executed by the processor 110. The processor 110 is coupled to the memory 130 via a processor bus 120. A system logic chip, such as a memory controller hub (MCH) may be coupled to the processor bus 120 and memory 130. An MCH can provide a high bandwidth memory path to memory 130 for instruction and data storage and for storage of graphics commands, data and textures. The MCH can be used to direct data signals between the processor 110, memory 130, and other components in the system 100 and to bridge the data signals between processor bus 120, memory 130, and system I/O, for example. The MCH may be coupled to memory 130 through a memory interface (e.g., memory interface 112). In some embodiments, the system logic chip can provide a graphics port for coupling to a graphics controller through an Accelerated Graphics Port (AGP) interconnect. The system 100 may also include an I/O controller hub (ICH). The ICH can provide direct connections to some I/O devices via a local I/O bus. The local I/O bus is a high-speed I/O bus for connecting peripherals to the memory 130, chipset, and processor 110. Some examples are the audio controller, firmware hub (flash BIOS), wireless transceiver, data storage, legacy I/O controller containing user input and keyboard interfaces, a serial expansion port such as Universal Serial Bus (USB), and a network controller. The data storage device can include a hard disk drive, a floppy disk drive, a CD-ROM device, a flash memory device, or other mass storage device.

System 100 is representative of processing systems based on the PENTIUM III™, PENTIUM 4™, Xeon™, Itanium, XScale™ and/or StrongARM™ microprocessors available from Intel Corporation of Santa Clara, California, although other systems (including PCs having other microprocessors, engineering workstations, set-top boxes and the like) may also be used. In one embodiment, system 100 executes a version of the WINDOWS™ operating system available from Microsoft Corporation of Redmond, Washington, although other operating systems (UNIX and Linux for example), embedded software, and/or graphical user interfaces, may also be used. Thus, embodiments of the present disclosure are not limited to any specific combination of hardware circuitry and software.

Embodiments described herein are not limited to computer systems. Alternative embodiments of the present disclosure may be used in other devices such as handheld devices and embedded applications. Some examples of handheld devices include cellular phones, Internet Protocol devices, digital cameras, personal digital assistants (PDAs), and handheld PCs. Embedded applications can include a micro controller, a digital signal processor (DSP), system on a chip, network computers (NetPC), set-top boxes, network hubs, wide area network (WAN) switches, or any other system that can perform one or more instructions in accordance with at least one embodiment.

Processor 110 may include one or more execution units. One embodiment may be described in the context of a single processor desktop or server system, but alternative embodiments may be included in a multiprocessor system. System 100 may be an example of a ‘hub’ system architecture. The computer system 100 includes a processor 110 to process data signals. The processor 110, as one illustrative example, includes a complex instruction set computer (CISC) microprocessor, a reduced instruction set computing (RISC) microprocessor, a very long instruction word (VLIW) microprocessor, a processor implementing a combination of instruction sets, or any other processor device, such as a digital signal processor, for example. The processor 110 is coupled to a processor bus 120 that transmits data signals between the processor 110 and other components in the system 100. Other elements of system 100 may include a graphics accelerator, memory controller hub, I/O controller hub, wireless transceiver, Flash BIOS, Network controller, Audio controller, Serial expansion port, I/O controller, etc.

In one embodiment, the processor 110 includes a Level 1 (L1) internal cache memory. Depending on the architecture, the processor 110 may have a single internal cache or multiple levels of internal caches. Other embodiments include a combination of both internal and external caches depending on the particular implementation and needs.

For another embodiment of a system, a Bitcoin mining hardware accelerator may be included on a system on a chip (SoC). One embodiment of an SoC includes a processor and a memory. The memory of the SoC may be a flash memory. The flash memory can be located on the same die as the processor and other system components. Additionally, other logic blocks such as a memory controller or graphics controller can also be located on an SoC. System 100 includes a logic device (LD) 101 operatively coupled to the processor 110. LD may be a programmable logic device (PLD) or a non-programmable logic device. In one embodiment, LD 101 may be a field-programmable gate array (FPGA). In other embodiments, LD 101 may be an Application Specific Integrated Circuit (ASIC), complex programmable logic device, Generic array logic, programmable logic array, or other type of LD. In one embodiment, processor 110 and LD 101 may be included on a single circuit board, each in their respective locations.

LD 101 is an integrated circuit used to build reconfigurable and/or non-reconfigurable digital circuits. The LD 101 can be an electronic component used in connection with other components or other integrated circuits, such as processor 110. In general, PLDs can have undefined functions at the time of manufacturing and can be programmed or reconfigured before use. The LD 101 can be a combination of a logic device and a memory device. The memory of the LD 101 can store a pattern that was given to the integrated circuit during programming. Data can be stored in the integrated circuit using various technologies, such as anti-fuses, Static Random-Access Memory (SRAM), EPROM cells, EEPROM cells, flash memory, or the like. The LD 101 can use any type of logic device technology. In one embodiment, LD 101 includes hardware accelerator 111 to perform the optimized digital currency mining operations described herein.

Fig. 1B illustrates a high-level bitcoin mining accelerator organization, in accordance with some embodiments. In one embodiment, the Bitcoin mining process starts with a 1024-bit message at 200. The message 200 may include a 32-bit version 201, a 256-bit hash 202 from the previous block, a 256-bit Merkle root 203 of the transactions, a 32-bit time stamp 204, a 32-bit target value 205, a 32-bit nonce 206, and a 384-bit padding 207. The 1024-bit message is compressed by a bitcoin mining engine 250 using two stages of 64-round SHA-256 to generate a 256-bit hash 208. This hash 208 is padded with a 256-bit constant 209 and is compressed again to obtain the final 256-bit hash 210.
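For orientation, the message layout above corresponds to the standard 80-byte Bitcoin block header; the 384-bit padding 207 and the 256-bit constant 209 are the SHA-256 padding of the two hashing passes. A minimal software sketch follows, with placeholder field values and illustrative function names.

```python
import hashlib
import struct

def block_header(version: int, prev_hash: bytes, merkle_root: bytes,
                 timestamp: int, target_bits: int, nonce: int) -> bytes:
    # 32-bit version + 256-bit previous hash + 256-bit Merkle root +
    # 32-bit timestamp + 32-bit target + 32-bit nonce = 640 bits (80 bytes).
    # SHA-256 padding extends this to the 1024-bit message of the figure.
    return (struct.pack("<I", version) + prev_hash + merkle_root +
            struct.pack("<III", timestamp, target_bits, nonce))

def mine_hash(header: bytes) -> bytes:
    # Stage-0/1 compress the padded 1024-bit header; Stage-2 compresses
    # the padded 256-bit intermediate hash, giving the final 256-bit hash.
    return hashlib.sha256(hashlib.sha256(header).digest()).digest()

header = block_header(0x20000000, b"\x00" * 32, b"\x11" * 32,
                      1636000000, 0x170ED0EB, 12345)
print(mine_hash(header).hex())
```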

The process of mining may involve identifying a nonce for a given header, which generates a final hash that is less than a pre-defined target value. This identification may be achieved by looking for a minimum number of leading zeros that would ensure the hash is smaller than the target. The target, and hence the leading zero requirement, may change depending on the rate of new block creation to maintain the rate at approximately one block every ten minutes. Decreasing the target may decrease the probability of finding a valid hash and hence increase the overall search space to generate a new block for the chain. In one embodiment, for a given header, the Bitcoin mining hardware accelerator traverses the search space of 2^32 options to potentially find a valid nonce. If no valid nonce is found, the Merkle root may be changed by choosing a different set of pending transactions and starting over with the nonce search. The SHA256 Stage-0 is performed once per Merkle root and can be implemented either in a one-time hashing hardware accelerator or in software. The nonce space exploration in SHA256 Stage-1 and Stage-2 is implemented as fully unrolled 64 rounds of SHA256 message digest and parallel message expansion logic. In one example, the three stages of hashing may be implemented as fully unrolled 64 rounds of SHA256 message digest and parallel message expansion logic. The computation-intensive SHA-256 hashing may be the major contributor to the energy consumption in a Bitcoin mining accelerator.
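The brute-force traversal of the 2^32 nonce space described above can be modeled as a simple loop; a mining ASIC evaluates many nonces in parallel, so the following is only a behavioral sketch with illustrative helper names.

```python
import hashlib
import struct

def find_nonce(header_prefix: bytes, target: int, max_tries: int = 1 << 20):
    # header_prefix: the 76-byte header without the nonce field.
    # Returns a valid nonce, or None so the caller can rebuild the Merkle
    # root from a different set of pending transactions and search again.
    for nonce in range(max_tries):
        header = header_prefix + struct.pack("<I", nonce)
        digest = hashlib.sha256(hashlib.sha256(header).digest()).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
    return None

# Example with an unrealistically easy target, for demonstration only.
print(find_nonce(b"\x00" * 76, 1 << 240))
```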

Fig. 2 illustrates a conventional 3-phase latch-based clocking apparatus at 300 that includes a first clock transmitting at a first latch stage i at 305, a second clock transmitting at a second latch stage i+1 at 310, and a third clock transmitting at a third latch stage i+2 at 315. A conventional fully-unrolled SHA256 datapath with 3-phase clocking is shown at 320 in Fig. 2. The 3 non-overlapping clock phases have a period of 1.5*T, where T is the clock period in a flop-based design. The datapath at 320 is depicted with each of the vertical dashed lines representing a time period (or “latch stage”) of 0.25*T. The clock phases (e.g., the latch stages of each of the clocks) are also separated by 0.25*T from each other, providing an inherent min-delay or hold-time margin. However, this separation also results in a dead-time of 0.25*T, as can be seen in the grey-shaded portions at 325 in each pipeline stage, limiting the overall hash throughput to 0.667/T.
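The throughput figure quoted above follows directly from the phase timing; a quick arithmetic check (with T normalized to 1):

```python
T = 1.0                       # normalized flop-based clock period
phase_period = 1.5 * T        # period of each of the 3 non-overlapping clocks
phase_separation = 0.25 * T   # built-in min-delay / hold-time margin
dead_time = 0.25 * T          # idle gap per pipeline stage
throughput = 1.0 / phase_period
print(throughput)             # ~0.667 hashes per T (one result every 1.5*T)
```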

Fig. 3 illustrates an example 3-phase latch-based clocking scheme for capturing and launching data, in accordance with various embodiments. Clk1 400 may be used for latching data from Round ‘i’ 415 in the registers (A through H) before the data is launched to the message digest logic. Clk2 405 may be used for capturing data from the message digest logic and for latching data from Round ‘i+1’ 420 in the registers (A through H) before the data is launched to the next message digest logic. The launch and capture of the data occur on different clocks. Clk3 410 may then be used for capturing data received from the next message digest logic.

Fig. 4 illustrates an example circuit 500 for generating Clk1 400, Clk2 405, and Clk3 410. A fully-unrolled SHA256 data path for bitcoin mining may include 120-deep pipelined data paths for the message digest and message scheduler operations. A flip-flop-based design may implement one round of SHA operation per pipeline stage, each comprising 256 flops in the digest and 512 flops in the scheduler. A latch-based pipeline design replaces flip-flops with latches, which are enabled by a 3-phase non-overlapping clock.

The circuit 500 may include six flip-flops, a NOR gate, and three AND gates coupled as shown. The 2x clock (clk_2x) distributed globally in the ASIC is input to the clock inputs of the flip-flops. The outputs of the AND gates are the non-overlapping clocks Clk1 400, Clk2 405, and Clk3 410. Clocks Clk1 400, Clk2 405, and Clk3 410 have the same frequency.

Fig. 5 illustrates a scheme for distribution of the 3-phase clock. A 3-phase clock implementation may use a global clock generation logic (e.g., circuit 500 of Fig. 4) to generate the 3 non-overlapping clocks using a global 2x frequency clock. Since two successive pipeline stages operate on different clocks, the point of clock path divergence is at the clock generation logic (CG), e.g. circuit 500. This increases the impact of OCV on clock skew, affecting both maximum frequency (Fmax) and min-delay yield. The effect of OCV may be further magnified by the typical operating voltage of mining ASICs in the range of 300-400 millivolts (mV). Each clock line (or interconnect) may have multiple buffers or inverters (not shown).

Fig. 6 illustrates a scheme with local clock generation using a global forwarded clock with local suppression, in accordance with some embodiments. Some embodiments use localized clock generation (e.g., circuit 500) for the digest and scheduler data paths. In some embodiments, the 2x frequency clock is forwarded from one round to the next along with the data. A local clock generation logic at each pipeline stage suppresses two out of three clock pulses to generate the appropriate derived clock (clk1, clk2, or clk3). Since the 3-phase clocks are derived locally, the point of divergence is closer to the pipeline stages, reducing the impact of OCV by, for example, approximately 95%. In some embodiments, the local clock generation logic (CG), e.g., circuit 500, is placed for every round. In some embodiments, the local clock generation logic is placed for every ‘x’ rounds (e.g., every 10 rounds).

Fig. 7 illustrates the generation of the enable signal that is used for the local clock-pulse suppression logic, in accordance with some embodiments. The enable generation logic 700 comprises three flip-flops and a NOR gate coupled as shown. The input clock 705 is the 2x clock (Clk_2x), and the enable signal 710 is used to generate Clk1, Clk2, and Clk3. For example, three different enables are generated for the three different clocks. The enable signal is daisy-chained through the pipeline stages in the same direction as the 2x clock and data. The pulse suppression logic uses a global enable signal that is used to gate the clock locally at each pipeline stage.
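A behavioral view of the pulse suppression: each locally generated clock passes one out of every three pulses of the forwarded 2x clock, selected by the daisy-chained enable. The sketch below is a cycle-level model only, not the gate-level circuits of Figs. 4 and 7; the function and parameter names are illustrative.

```python
def three_phase_clocks(num_2x_pulses: int, phase: int):
    # phase selects which of the three local clocks this stage generates
    # (0 -> clk1, 1 -> clk2, 2 -> clk3). The enable is asserted on every
    # third pulse of the forwarded 2x clock, suppressing the other two.
    pulses = []
    for n in range(num_2x_pulses):
        enable = (n % 3) == phase          # daisy-chained enable, modeled locally
        pulses.append(1 if enable else 0)  # AND of enable with the 2x clock pulse
    return pulses

print(three_phase_clocks(9, 0))  # clk1: [1, 0, 0, 1, 0, 0, 1, 0, 0]
print(three_phase_clocks(9, 1))  # clk2: [0, 1, 0, 0, 1, 0, 0, 1, 0]
print(three_phase_clocks(9, 2))  # clk3: [0, 0, 1, 0, 0, 1, 0, 0, 1]
```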

Fig. 8 illustrates a variation tolerant latch-based clocking apparatus, in accordance with some embodiments. As discussed herein, the pulse suppression logic uses a global enable signal that is used to gate the clock locally at each pipeline stage. The enable signal ‘en’ is daisy-chained through the pipeline stages and used in the pulse suppression logic to selectively enable one out of three pulses of the 2x frequency clock. The cost of the local pulse suppression logic is amortized across the 768 latches in the digest and scheduler that are enabled by the generated clock, and hence results in negligible overhead. Given the order of the clock pulses, the daisy chain passing the enable signal is routed from pipeline stage-1 to stage-3, back to stage-2, and then on to stage-4, and so on. This eliminates the need for an additional flop in the pulse suppression logic. In some embodiments, the scheduler runs at half the frequency of the digest data path. In that case, there is a divide-by-two circuit in each of the scheduler stages, which takes the locally generated clk1/clk2/clk3 as input.

In some embodiments, an optimized data path is used for both digest and scheduler, where the output inversion at each Boolean function and carry-save adder tree is removed, resulting in inverted outputs at each stage. The functional inversions are corrected or accounted for in the subsequent stages of the data path, generating the expected final hash output. Removing functional inversions allows a significant reduction in the number of inverters used in the data path. Since the inversions are effectively corrected or consumed in subsequent stages without adding any additional inverters, embodiments herein may result in, for example, approximately 10% energy-efficiency improvement, while also improving data path performance and area.

Fig. 9 illustrates an optimized performance critical message digest data path 900, in accordance with some embodiments. Each round in the single SHA-256 message digest may combine eight 32-bit words known as states Ai through Hi along with a 32-bit message, Wi, and a 32-bit round constant, Ki, to generate two new 32-bit states Ai+1 and Ei+1. The new states Bi+1 through Di+1 may be equal to Ai through Ci, and Fi+1 through Hi+1 may be equal to Ei through Gi. The critical path for both Ai+1 and Ei+1 may be identical and may include four Carry Save Adders (CSA) followed by a Completion Adder (CA). This may equate to approximately 19 logic gate levels. Here, “Maj” refers to a majority gate that performs a majority function of the inputs. Here, “Ch” refers to a multiplexer or choice function.

The two operations performed in the SHA256 algorithm are the message digest and the message scheduler. The data path operates on a 256-bit input represented as 8x32-bit words A through H. Boolean functions Σ0, Σ1, Majority, and Ch (change or multiplexer), followed by 32-bit modular addition, generate a new value of A and E in each cycle. At the same time, a new message word (W) from the scheduler is added to a round constant (K) and two digest words (G, B) to compute a new value of H in carry-save format.
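The round function described above is the standard SHA-256 message digest step. A compact software reference is shown below; the hardware keeps H and the intermediate sums in carry-save form, which this plain model does not capture.

```python
MASK = 0xFFFFFFFF

def rotr(x, n):
    return ((x >> n) | (x << (32 - n))) & MASK

def big_sigma0(a): return rotr(a, 2) ^ rotr(a, 13) ^ rotr(a, 22)
def big_sigma1(e): return rotr(e, 6) ^ rotr(e, 11) ^ rotr(e, 25)
def maj(a, b, c):  return (a & b) ^ (a & c) ^ (b & c)
def ch(e, f, g):   return (e & f) ^ (~e & g & MASK)

def digest_round(state, w, k):
    # state = (A, B, C, D, E, F, G, H); returns the next-round state.
    a, b, c, d, e, f, g, h = state
    t1 = (h + big_sigma1(e) + ch(e, f, g) + k + w) & MASK
    t2 = (big_sigma0(a) + maj(a, b, c)) & MASK
    # Only A and E are newly computed; the other six words shift down.
    return ((t1 + t2) & MASK, a, b, c, (d + t1) & MASK, e, f, g)

print(digest_round((0x6A09E667, 0xBB67AE85, 0x3C6EF372, 0xA54FF53A,
                    0x510E527F, 0x9B05688C, 0x1F83D9AB, 0x5BE0CD19),
                   w=0, k=0x428A2F98))
```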

Fig. 10A illustrates a pass-gate implementation 1000A of sigma functions Σ0, Σ1 that perform a 3-way XOR operation on rotated versions of A and E, respectively. The 3-way XOR function uses an inversion at the output to generate the exact values of Σ0 and Σ1. The sigma functions take rotated (Rot) versions of A and/or E and XOR them. For instance, Σ0 XORs RotRight(A,2), RotRight(A,13) and RotRight(A,22). The rotations are realized by interleaving the 32 bits of A and are just interconnects, in accordance with some embodiments. As can be seen at 1000B, in some embodiments the circuit may be optimized by removing the output inverter to generate the inverted values of Σ0 and Σ1.

Fig. 10B illustrates a majority gate 1000C. The majority gate 1000C performs a majority function of the inputs A, B, and C, and provides an output “Maj.” As may be seen at 1000D, the output inverter can be eliminated to optimize the area of the majority gate. As such, the output is an inverted version of Maj.

Fig. 10C illustrates a change (Ch) or multiplexer function 1000E. Here, input ‘A’ is the select input, and inputs B and C are the inputs to the multiplexer, one of which is selected by the control or select input A. The Ch function implements a 2:1 multiplexer function. Like the embodiments of Figs. 10A and 10B, the inverter at the output of the Ch function can also be removed, as seen at 1000F. As such, the output is an inverted version of Ch.

Fig. 11 illustrates a CSA 1100A and an inverted version of a CSA 1100B. The CSA adds inputs A, B, and C, and generates outputs Sum and Carry. The output inverters of the CSA can also be removed to reduce area. As such, the Carry and Sum outputs are the inverted versions of Carry and Sum.
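A carry-save adder reduces three operands to a sum word and a carry word without propagating carries, and the inverted-output variant simply produces the complements of both words. A small software sketch (modeling the missing output inverters by explicit inversion), with a check that the redundant form still represents the correct sum:

```python
MASK = 0xFFFFFFFF

def csa(a, b, c):
    # Bitwise full adder: sum is the 3-way XOR, carry is the majority
    # of the inputs shifted left by one bit position.
    s = a ^ b ^ c
    carry = ((a & b) | (b & c) | (a & c)) << 1
    return s & MASK, carry & MASK

def csa_inverted(a, b, c):
    # Variant with the output inverters removed conceptually: the cell
    # produces the complements of Sum and Carry.
    s, carry = csa(a, b, c)
    return ~s & MASK, ~carry & MASK

a, b, c = 0x12345678, 0x9ABCDEF0, 0x0F0F0F0F
s, k = csa(a, b, c)
assert (s + k) & MASK == (a + b + c) & MASK  # redundant form preserves the sum mod 2^32
print(hex(s), hex(k))
```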

Fig. 12 illustrates an optimized data path 1200 using inverted logic, in accordance with some embodiments. Here, the bar over a signal name represents an inversion. For example, A with a bar on top is an inverted version of A, as described above with respect to Figs. 10A, 10B, 10C, or 11. The inversions in the intermediate Boolean functions and the CSAs may be consumed in subsequent logic, assisted by appropriate inversion of some of the input digest state words. For instance, words A, B, C, Hs and Hc are inverted, while E, F, G are still represented in the non-inverted form. For example, by inverting the inputs to the inverted Majority function, the correct value of Majority is computed. The inverted value for input state word A is generated by using an XNOR gate (instead of an XOR) for sum generation in the 32-bit carry completion adder in the previous round.
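The statement that inverting the inputs to the inverted Majority function recovers the correct Majority value relies on the self-duality of the majority function, which a short exhaustive check confirms:

```python
def maj(a, b, c):
    return (a & b) | (b & c) | (a & c)

# Majority is self-dual: Maj(~a, ~b, ~c) == ~Maj(a, b, c), so the
# inverted-output Majority cell driven by inverted words A, B, C
# produces the correct (non-inverted) Majority value.
assert all(maj(1 - a, 1 - b, 1 - c) == 1 - maj(a, b, c)
           for a in (0, 1) for b in (0, 1) for c in (0, 1))
print("Majority is self-dual")
```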

Similarly, the 32-bit completion adder consuming the inverted versions of S4 and C4 may be modified to consume the inversion in the first stage of propagate and generate signal generation:

Conventional:
Propagate signal: P = A xor B
Generate signal: G = A and B

Modified:
Modified propagate signal: P = ~A xor' ~B, where xor' is an inverted 2-input XOR
Modified generate signal: G = ~A nor ~B
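A truth-table check of the modified first stage (writing xor' as XNOR): with complemented operands, the NOR reproduces the conventional generate signal exactly, while the XNOR yields the complement of the conventional propagate, consistent with the inverted intermediate values that the modified adder is described as consuming. This is an illustrative check only, not the circuit of the data path.

```python
def conventional(a, b):
    return a ^ b, a & b                      # P = A xor B, G = A and B

def modified(na, nb):
    # Operates directly on the complemented operands ~A and ~B.
    p_mod = 1 - (na ^ nb)                    # xor': inverted 2-input XOR (XNOR)
    g_mod = 1 - (na | nb)                    # ~A nor ~B
    return p_mod, g_mod

for a in (0, 1):
    for b in (0, 1):
        p, g = conventional(a, b)
        p_mod, g_mod = modified(1 - a, 1 - b)
        assert g_mod == g          # generate is reproduced exactly
        assert p_mod == 1 - p      # propagate comes out complemented
print("modified first stage verified")
```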

Fig. 13 illustrates a message schedule data path 1300 that operates on a 512-bit message, organized as 16x32-bit words, and involves 3-way XOR functions σ0, σ1, followed by CSAs and a 32-bit completion adder. The 3-way XOR function uses an inversion at the output to generate the exact values of σ0 and σ1. The sigma functions take rotated (Rot) versions of A and/or E and XOR them. For instance, σ0 XORs RotRight(A,2), RotRight(A,13) and RotRight(A,22). The rotations are realized by interleaving the 32 bits of A and are just interconnects, in accordance with some embodiments.
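For completeness, the message expansion that this data path unrolls is the standard SHA-256 schedule; the rotation and shift amounts below are those of the SHA-256 specification, shown as a plain software model without the carry-save optimization.

```python
MASK = 0xFFFFFFFF

def rotr(x, n):
    return ((x >> n) | (x << (32 - n))) & MASK

def small_sigma0(x): return rotr(x, 7) ^ rotr(x, 18) ^ (x >> 3)
def small_sigma1(x): return rotr(x, 17) ^ rotr(x, 19) ^ (x >> 10)

def expand_message(block_words):
    # block_words: the 16 x 32-bit words of one 512-bit message block.
    w = list(block_words)
    for t in range(16, 64):
        w.append((small_sigma1(w[t - 2]) + w[t - 7] +
                  small_sigma0(w[t - 15]) + w[t - 16]) & MASK)
    return w

print(len(expand_message([0] * 16)))  # 64 expanded words
```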

Fig. 14 illustrates an optimized message scheduler data path 1400 with logic inversion, in accordance with some embodiments. Similar to the message digest, the message scheduler data path is also optimized with inverted Boolean and CSA cells. In this embodiment, the inversion may be represented by bars over various values, such as Sx and Cx, as described above.

Fig. 15 illustrates an inverted XNOR gate 1500, in accordance with some embodiments. To compensate for the logic inversion, the σ0 function of Fig. 14 is implemented as a 3-way XNOR with an inverted output, σ0'.

Fig. 16 illustrates a smart device or a computer system or a SoC (System-on-Chip) including variation tolerant latch-based clocking apparatus and/or energy efficient bitcoin mining accelerator, in accordance with some embodiments. It is pointed out that those elements of Fig. 16 having the same reference numbers (or names) as the elements of any other figure may operate or function in any manner similar to that described, but are not limited to such.

In some embodiments, device 5500 represents an appropriate computing device, such as a computing tablet, a mobile phone or smart-phone, a laptop, a desktop, an Internet-of-Things (IoT) device, a server, a wearable device, a set-top box, a wireless-enabled e-reader, or the like. It will be understood that certain components are shown generally, and not all components of such a device are shown in device 5500.

In an example, the device 5500 comprises an SoC (System-on-Chip) 5501. An example boundary of the SoC 5501 is illustrated using dotted lines in Fig. 16, with some example components being illustrated to be included within SoC 5501 - however, SoC 5501 may include any appropriate components of device 5500.

In some embodiments, device 5500 includes processor 5504. Processor 5504 can include one or more physical devices, such as microprocessors, application processors, microcontrollers, programmable logic devices, processing cores, or other processing means. The processing operations performed by processor 5504 include the execution of an operating platform or operating system on which applications and/or device functions are executed. The processing operations include operations related to I/O (input/output) with a human user or with other devices, operations related to power management, operations related to connecting computing device 5500 to another device, and/or the like. The processing operations may also include operations related to audio I/O and/or display I/O.

In some embodiments, processor 5504 includes multiple processing cores (also referred to as cores) 5508a, 5508b, 5508c. Although merely three cores 5508a, 5508b, 5508c are illustrated in Fig. 16, processor 5504 may include any other appropriate number of processing cores, e.g., tens, or even hundreds of processing cores. Processor cores 5508a, 5508b, 5508c may be implemented on a single integrated circuit (IC) chip. Moreover, the chip may include one or more shared and/or private caches, buses or interconnections, graphics and/or memory controllers, or other components.

In some embodiments, processor 5504 includes cache 5506. In an example, sections of cache 5506 may be dedicated to individual cores 5508 (e.g., a first section of cache 5506 dedicated to core 5508a, a second section of cache 5506 dedicated to core 5508b, and so on). In an example, one or more sections of cache 5506 may be shared among two or more of cores 5508. Cache 5506 may be split in different levels, e.g., level 1 (LI) cache, level 2 (L2) cache, level 3 (L3) cache, etc.

In some embodiments, processor core 5504 may include a fetch unit to fetch instructions (including instructions with conditional branches) for execution by the core 5504. The instructions may be fetched from any storage devices such as the memory 5530. Processor core 5504 may also include a decode unit to decode the fetched instruction. For example, the decode unit may decode the fetched instruction into a plurality of microoperations. Processor core 5504 may include a schedule unit to perform various operations associated with storing decoded instructions. For example, the schedule unit may hold data from the decode unit until the instructions are ready for dispatch, e.g., until all source values of a decoded instruction become available. In one embodiment, the schedule unit may schedule and/or issue (or dispatch) decoded instructions to an execution unit for execution.

The execution unit may execute the dispatched instructions after they are decoded (e.g., by the decode unit) and dispatched (e.g., by the schedule unit). In an embodiment, the execution unit may include more than one execution unit (such as an imaging computational unit, a graphics computational unit, a general-purpose computational unit, etc.). The execution unit may also perform various arithmetic operations such as addition, subtraction, multiplication, and/or division, and may include one or more arithmetic logic units (ALUs). In an embodiment, a co-processor (not shown) may perform various arithmetic operations in conjunction with the execution unit.

Further, the execution unit may execute instructions out-of-order. Hence, processor core 5504 may be an out-of-order processor core in one embodiment. Processor core 5504 may also include a retirement unit. The retirement unit may retire executed instructions after they are committed. In an embodiment, retirement of the executed instructions may result in processor state being committed from the execution of the instructions, physical registers used by the instructions being de-allocated, etc. Processor core 5504 may also include a bus unit to enable communication between components of processor core 5504 and other components via one or more buses. Processor core 5504 may also include one or more registers to store data accessed by various components of the core 5504 (such as values related to assigned app priorities and/or sub-system states (modes) association).

In some embodiments, device 5500 comprises connectivity circuitries 5531. For example, connectivity circuitries 5531 includes hardware devices (e.g., wireless and/or wired connectors and communication hardware) and/or software components (e.g., drivers, protocol stacks), e.g., to enable device 5500 to communicate with external devices. Device 5500 may be separate from the external devices, such as other computing devices, wireless access points or base stations, etc.

In an example, connectivity circuitries 5531 may include multiple different types of connectivity. To generalize, the connectivity circuitries 5531 may include cellular connectivity circuitries, wireless connectivity circuitries, etc. Cellular connectivity circuitries of connectivity circuitries 5531 refers generally to cellular network connectivity provided by wireless carriers, such as provided via GSM (global system for mobile communications) or variations or derivatives, CDMA (code division multiple access) or variations or derivatives, TDM (time division multiplexing) or variations or derivatives, 3rd Generation Partnership Project (3GPP) Universal Mobile Telecommunications Systems (UMTS) system or variations or derivatives, 3GPP Long-Term Evolution (LTE) system or variations or derivatives, 3GPP LTE-Advanced (LTE-A) system or variations or derivatives, Fifth Generation (5G) wireless system or variations or derivatives, 5G mobile networks system or variations or derivatives, 5G New Radio (NR) system or variations or derivatives, or other cellular service standards. Wireless connectivity circuitries (or wireless interface) of the connectivity circuitries 5531 refers to wireless connectivity that is not cellular, and can include personal area networks (such as Bluetooth, Near Field, etc.), local area networks (such as Wi-Fi), and/or wide area networks (such as WiMax), and/or other wireless communication. In an example, connectivity circuitries 5531 may include a network interface, such as a wired or wireless interface, e.g., so that a system embodiment may be incorporated into a wireless device, for example, a cell phone or personal digital assistant.

In some embodiments, device 5500 comprises control hub 5532, which represents hardware devices and/or software components related to interaction with one or more I/O devices. For example, processor 5504 may communicate with one or more of display 5522, one or more peripheral devices 5524, storage devices 5528, one or more other external devices 5529, etc., via control hub 5532. Control hub 5532 may be a chipset, a Platform Control Hub (PCH), and/or the like.

For example, control hub 5532 illustrates one or more connection points for additional devices that connect to device 5500, e.g., through which a user might interact with the system. For example, devices (e.g., devices 5529) that can be attached to device 5500 include microphone devices, speaker or stereo systems, audio devices, video systems or other display devices, keyboard or keypad devices, or other I/O devices for use with specific applications such as card readers or other devices.

As mentioned above, control hub 5532 can interact with audio devices, display 5522, etc. For example, input through a microphone or other audio device can provide input or commands for one or more applications or functions of device 5500. Additionally, audio output can be provided instead of, or in addition to, display output. In another example, if display 5522 includes a touch screen, display 5522 also acts as an input device, which can be at least partially managed by control hub 5532. There can also be additional buttons or switches on computing device 5500 to provide I/O functions managed by control hub 5532. In one embodiment, control hub 5532 manages devices such as accelerometers, cameras, light sensors or other environmental sensors, or other hardware that can be included in device 5500. The input can be part of direct user interaction, as well as providing environmental input to the system to influence its operations (such as filtering for noise, adjusting displays for brightness detection, applying a flash for a camera, or other features).

In some embodiments, control hub 5532 may couple to various devices using any appropriate communication protocol, e.g., PCIe (Peripheral Component Interconnect Express), USB (Universal Serial Bus), Thunderbolt, High Definition Multimedia Interface (HDMI), Firewire, etc.

In some embodiments, display 5522 represents hardware (e.g., display devices) and software (e.g., drivers) components that provide a visual and/or tactile display for a user to interact with device 5500. Display 5522 may include a display interface, a display screen, and/or a hardware device used to provide a display to a user. In some embodiments, display 5522 includes a touch screen (or touch pad) device that provides both output and input to a user. In an example, display 5522 may communicate directly with the processor 5504. Display 5522 can be one or more of an internal display device, as in a mobile electronic device or a laptop device, or an external display device attached via a display interface (e.g., DisplayPort, etc.). In one embodiment, display 5522 can be a head mounted display (HMD) such as a stereoscopic display device for use in virtual reality (VR) applications or augmented reality (AR) applications.

In some embodiments, and although not illustrated in the figure, in addition to (or instead of) processor 5504, device 5500 may include a Graphics Processing Unit (GPU) comprising one or more graphics processing cores, which may control one or more aspects of displaying contents on display 5522.

Control hub 5532 (or platform controller hub) may include hardware interfaces and connectors, as well as software components (e.g., drivers, protocol stacks) to make peripheral connections, e.g., to peripheral devices 5524.

It will be understood that device 5500 could both be a peripheral device to other computing devices, as well as have peripheral devices connected to it. Device 5500 may have a “docking” connector to connect to other computing devices for purposes such as managing (e.g., downloading and/or uploading, changing, synchronizing) content on device 5500. Additionally, a docking connector can allow device 5500 to connect to certain peripherals that allow computing device 5500 to control content output, for example, to audiovisual or other systems.

In addition to a proprietary docking connector or other proprietary connection hardware, device 5500 can make peripheral connections via common or standards-based connectors. Common types can include a Universal Serial Bus (USB) connector (which can include any of a number of different hardware interfaces), DisplayPort including MiniDisplayPort (MDP), High Definition Multimedia Interface (HDMI), Firewire, or other types.

In some embodiments, connectivity circuitries 5531 may be coupled to control hub 5532, e.g., in addition to, or instead of, being coupled directly to the processor 5504. In some embodiments, display 5522 may be coupled to control hub 5532, e.g., in addition to, or instead of, being coupled directly to processor 5504. In some embodiments, device 5500 comprises memory 5530 coupled to processor 5504 via memory interface 5534. Memory 5530 includes memory devices for storing information in device 5500.

In some embodiments, memory 5530 includes apparatus to maintain stable clocking as described with reference to various embodiments. Memory can include nonvolatile (state does not change if power to the memory device is interrupted) and/or volatile (state is indeterminate if power to the memory device is interrupted) memory devices. Memory device 5530 can be a dynamic random-access memory (DRAM) device, a static random-access memory (SRAM) device, flash memory device, phase-change memory device, or some other memory device having suitable performance to serve as process memory. In one embodiment, memory 5530 can operate as system memory for device 5500, to store data and instructions for use when the one or more processors 5504 executes an application or process. Memory 5530 can store application data, user data, music, photos, documents, or other data, as well as system data (whether long-term or temporary) related to the execution of the applications and functions of device 5500.

Elements of various embodiments and examples are also provided as a machine-readable medium (e.g., memory 5530) for storing the computer-executable instructions (e.g., instructions to implement any other processes discussed herein). The machine-readable medium (e.g., memory 5530) may include, but is not limited to, flash memory, optical disks, CD-ROMs, DVD ROMs, RAMs, EPROMs, EEPROMs, magnetic or optical cards, phase change memory (PCM), or other types of machine-readable media suitable for storing electronic or computer-executable instructions. For example, embodiments of the disclosure may be downloaded as a computer program (e.g., BIOS) which may be transferred from a remote computer (e.g., a server) to a requesting computer (e.g., a client) by way of data signals via a communication link (e.g., a modem or network connection).

In some embodiments, device 5500 comprises temperature measurement circuitries 5540, e.g., for measuring temperature of various components of device 5500. In an example, temperature measurement circuitries 5540 may be embedded, or coupled or attached to various components, whose temperatures are to be measured and monitored. For example, temperature measurement circuitries 5540 may measure temperature of (or within) one or more of cores 5508a, 5508b, 5508c, voltage regulator 5514, memory 5530, a mother-board of SoC 5501, and/or any appropriate component of device 5500. In some embodiments, temperature measurement circuitries 5540 include a low power hybrid reverse (LPHR) bandgap reference (BGR) and digital temperature sensor (DTS), which utilizes a subthreshold metal oxide semiconductor (MOS) transistor and the parasitic PNP bipolar junction transistor (BJT) device to form a reverse BGR that serves as the base for configurable BGR or DTS operating modes. The LPHR architecture uses low-cost MOS transistors and the standard parasitic PNP device. Based on a reverse bandgap voltage, the LPHR can work as a configurable BGR. By comparing the configurable BGR with the scaled base-emitter voltage, the circuit can also perform as a DTS with a linear transfer function with single-temperature trim for high accuracy.
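For illustration only, a DTS with a linear transfer function and a single-temperature trim can be sketched in a few lines of Python. The CTAT slope of the scaled base-emitter voltage and the trim point below are assumptions made for this sketch, not values from this disclosure.

# Minimal behavioral sketch of a DTS with a linear transfer function and a
# single-temperature trim. The slope and the trim values are illustrative
# assumptions, not taken from the disclosure.

VBE_SLOPE_MV_PER_C = -1.8   # assumed CTAT slope of the parasitic PNP's VBE

def vbe_mv(temp_c, vbe_at_trim_mv, trim_temp_c):
    """Assumed linear (CTAT) model of the scaled base-emitter voltage."""
    return vbe_at_trim_mv + VBE_SLOPE_MV_PER_C * (temp_c - trim_temp_c)

def dts_reading(vbe_meas_mv, vbe_at_trim_mv, trim_temp_c):
    """Invert the linear transfer function: one trim point fixes the offset,
    the (assumed) known slope fixes the gain."""
    return trim_temp_c + (vbe_meas_mv - vbe_at_trim_mv) / VBE_SLOPE_MV_PER_C

# Single-temperature trim at 25 C, then read back at an arbitrary temperature.
trim_t, trim_vbe = 25.0, 550.0            # hypothetical trim point
print(dts_reading(vbe_mv(85.0, trim_vbe, trim_t), trim_vbe, trim_t))  # ~85.0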

In some embodiments, device 5500 comprises power measurement circuitries 5542, e.g., for measuring power consumed by one or more components of the device 5500. In an example, in addition to, or instead of, measuring power, the power measurement circuitries 5542 may measure voltage and/or current. In an example, the power measurement circuitries 5542 may be embedded, or coupled or attached to various components, whose power, voltage, and/or current consumption are to be measured and monitored. For example, power measurement circuitries 5542 may measure power, current and/or voltage supplied by one or more voltage regulators 5514, power supplied to SoC 5501, power supplied to device 5500, power consumed by processor 5504 (or any other component) of device 5500, etc.

In some embodiments, device 5500 comprises one or more voltage regulator circuitries, generally referred to as voltage regulator (VR) 5514. VR 5514 generates signals at appropriate voltage levels, which may be supplied to operate any appropriate components of the device 5500. Merely as an example, VR 5514 is illustrated to be supplying signals to processor 5504 of device 5500. In some embodiments, VR 5514 receives one or more Voltage Identification (VID) signals, and generates the voltage signal at an appropriate level, based on the VID signals. Various types of VRs may be utilized for the VR 5514. For example, VR 5514 may include a “buck” VR, a “boost” VR, a combination of buck and boost VRs, low dropout (LDO) regulators, switching DC-DC regulators, constant-on-time controller-based DC-DC regulators, etc. A buck VR is generally used in power delivery applications in which an input voltage needs to be transformed to an output voltage in a ratio that is smaller than unity. A boost VR is generally used in power delivery applications in which an input voltage needs to be transformed to an output voltage in a ratio that is larger than unity. In some embodiments, each processor core has its own VR, which is controlled by PCU 5510a/b and/or PMIC 5512. In some embodiments, each core has a network of distributed LDOs to provide efficient control for power management. The LDOs can be digital, analog, or a combination of digital and analog LDOs. In some embodiments, VR 5514 includes current tracking apparatus to measure current through power supply rail(s).
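To make the buck and boost conversion ratios above concrete, the ideal (lossless, continuous-conduction) relations are Vout = D * Vin for a buck and Vout = Vin / (1 - D) for a boost, where D is the duty cycle. The Python helpers below are an idealized sketch only, not a model of VR 5514.

def ideal_buck_vout(vin, duty):
    """Ideal buck: output/input ratio smaller than unity (Vout = D * Vin)."""
    assert 0.0 <= duty <= 1.0
    return duty * vin

def ideal_boost_vout(vin, duty):
    """Ideal boost: output/input ratio larger than unity (Vout = Vin / (1 - D))."""
    assert 0.0 <= duty < 1.0
    return vin / (1.0 - duty)

print(ideal_buck_vout(12.0, 0.1))   # 12 V in -> 1.2 V out (ratio < 1)
print(ideal_boost_vout(3.3, 0.7))   # 3.3 V in -> 11 V out (ratio > 1)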

In some embodiments, VR 5514 includes a digital control scheme to manage states of a proportional-integral-derivative (PID) filter (also known as a digital Type-III compensator). The digital control scheme controls the integrator of the PID filter to implement non-linear control of saturating the duty cycle, during which the proportional and derivative terms of the PID are set to 0 while the integrator and its internal states (previous values or memory) are set to a duty cycle that is the sum of the current nominal duty cycle plus a deltaD. The deltaD is the maximum duty cycle increment used to regulate the voltage regulator from ICCmin to ICCmax, and is a configuration register that can be set post-silicon. A state machine moves from a non-linear all-ON state (which brings the output voltage Vout back to a regulation window) to an open-loop duty cycle which maintains the output voltage slightly higher than the required reference voltage Vref. After a certain period in this state of open-loop operation at the commanded duty cycle, the state machine then ramps down the open-loop duty cycle value until the output voltage is close to the commanded Vref. As such, output chatter on the output supply from VR 5514 is completely eliminated (or substantially eliminated) and there is merely a single undershoot transition, which could lead to a guaranteed Vmin based on a comparator delay and the di/dt of the load with the available output decoupling capacitance.
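The following Python sketch models the control sequence described above at a behavioral level: a saturated all-ON phase, an open-loop hold slightly above Vref, and a ramp-down of the open-loop duty cycle back toward closed-loop regulation. The state names, thresholds, and step sizes are illustrative assumptions rather than details of the disclosed compensator.

# Behavioral sketch (not the disclosed RTL) of the droop-recovery sequence.
ALL_ON, OPEN_LOOP_HOLD, RAMP_DOWN, CLOSED_LOOP = range(4)

class DutyCycleStateMachine:
    def __init__(self, nominal_duty, delta_d, hold_cycles):
        self.nominal_duty = nominal_duty
        self.delta_d = delta_d          # maximum duty increment (config register)
        self.hold_cycles = hold_cycles  # how long to stay in the open-loop state
        self.state = CLOSED_LOOP
        self.integrator = nominal_duty
        self.counter = 0

    def on_droop_detected(self):
        # Non-linear entry: P and D terms are zeroed (not modelled here) and the
        # integrator is preloaded to nominal duty + deltaD.
        self.integrator = self.nominal_duty + self.delta_d
        self.state = ALL_ON

    def step(self, vout, vref, regulation_window, ramp_step=0.005):
        if self.state == ALL_ON:
            duty = 1.0                                  # saturate the duty cycle
            if abs(vout - vref) <= regulation_window:   # back inside the window
                self.state, self.counter = OPEN_LOOP_HOLD, 0
        elif self.state == OPEN_LOOP_HOLD:
            duty = self.integrator                      # hold slightly above Vref
            self.counter += 1
            if self.counter >= self.hold_cycles:
                self.state = RAMP_DOWN
        elif self.state == RAMP_DOWN:
            self.integrator = max(self.nominal_duty, self.integrator - ramp_step)
            duty = self.integrator
            if vout <= vref + regulation_window:        # close enough to Vref
                self.state = CLOSED_LOOP
        else:
            duty = self.nominal_duty                    # closed-loop PID (not modelled)
        return duty

sm = DutyCycleStateMachine(nominal_duty=0.4, delta_d=0.1, hold_cycles=8)
sm.on_droop_detected()
print(sm.step(vout=0.95, vref=1.0, regulation_window=0.02))  # 1.0: still in the all-ON phase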

In some embodiments, VR 5514 includes a separate self-start controller, which is functional without fuse and/or trim information. The self-start controller protects VR 5514 against large inrush currents and voltage overshoots, while being capable of following a variable VID (voltage identification) reference ramp imposed by the system. In some embodiments, the self-start controller uses a relaxation oscillator built into the controller to set the switching frequency of the buck converter. The oscillator can be initialized using either a clock or current reference to be close to a desired operating frequency. The output of VR 5514 is coupled weakly to the oscillator to set the duty cycle for closed loop operation. The controller is naturally biased such that the output voltage is always slightly higher than the set point, eliminating the need for any process, voltage, and/or temperature (PVT) imposed trims. In some embodiments, VR 5514 includes a controlled current source or a parallel current source (PCS) to assist a DC-DC buck converter and to alleviate the stress on the C4 bumps while boosting the efficiency of the DC-DC converter in high-load current scenarios. The PCS adds current to the output power supply rail, which is coupled to a load. In some embodiments, the PCS is activated to mitigate droop events due to high di/dt events on the output power supply rail. The PCS provides charge directly to the load (driving in parallel to the DC-DC converter) whenever the current supplied by the DC-DC converter is above a certain threshold level.
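As a simple illustration of the load sharing described above, the sketch below splits the load between the buck converter and the PCS once a current threshold is exceeded. The threshold value and the split policy are assumptions made for this sketch, not the disclosed control law.

# Sketch of the parallel current source (PCS) assist rule: the PCS injects
# current into the output rail once the buck converter's current exceeds a
# threshold. The threshold and the split are illustrative assumptions.

def split_load_current(i_load_a, pcs_threshold_a):
    """Return (buck_current, pcs_current) for a given load current."""
    if i_load_a <= pcs_threshold_a:
        return i_load_a, 0.0                               # buck carries the load alone
    return pcs_threshold_a, i_load_a - pcs_threshold_a     # PCS supplies the excess

print(split_load_current(35.0, pcs_threshold_a=25.0))      # -> (25.0, 10.0)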

In some embodiments, device 5500 comprises one or more clock generator circuitries, generally referred to as clock generator 5516. Clock generator 5516 generates clock signals at appropriate frequency levels, which may be supplied to any appropriate components of device 5500. Merely as an example, clock generator 5516 is illustrated to be supplying clock signals to processor 5504 of device 5500. In some embodiments, clock generator 5516 receives one or more Frequency Identification (FID) signals, and generates the clock signals at an appropriate frequency, based on the FID signals.

In some embodiments, device 5500 comprises battery 5518 supplying power to various components of device 5500. Merely as an example, battery 5518 is illustrated to be supplying power to processor 5504. Although not illustrated in the figures, device 5500 may comprise a charging circuitry, e.g., to recharge the battery, based on Alternating Current (AC) power supply received from an AC adapter.

In some embodiments, the charging circuitry (e.g., 5518) comprises a buck-boost converter. This buck-boost converter comprises DrMOS or DrGaN devices used in place of half-bridges for traditional buck-boost converters. Various embodiments here are described with reference to DrMOS. However, the embodiments are applicable to DrGaN. The DrMOS devices allow for better efficiency in power conversion due to reduced parasitics and optimized MOSFET packaging. Since the dead-time management is internal to the DrMOS, the dead-time management is more accurate than for traditional buck-boost converters, leading to higher efficiency in conversion. Higher frequency of operation allows for smaller inductor size, which in turn reduces the z-height of the charger comprising the DrMOS based buck-boost converter. The buck-boost converter of various embodiments comprises dual-folded bootstrap for DrMOS devices. In some embodiments, in addition to the traditional bootstrap capacitors, folded bootstrap capacitors are added that cross-couple inductor nodes to the two sets of DrMOS switches.

In some embodiments, device 5500 comprises Power Control Unit (PCU) 5510 (also referred to as Power Management Unit (PMU), Power Management Controller (PMC), Power Unit (p-unit), etc.). In an example, some sections of PCU 5510 may be implemented by one or more processing cores 5508, and these sections of PCU 5510 are symbolically illustrated using a dotted box and labelled PCU 5510a. In an example, some other sections of PCU 5510 may be implemented outside the processing cores 5508, and these sections of PCU 5510 are symbolically illustrated using a dotted box and labelled as PCU 5510b. PCU 5510 may implement various power management operations for device 5500. PCU 5510 may include hardware interfaces, hardware circuitries, connectors, registers, etc., as well as software components (e.g., drivers, protocol stacks), to implement various power management operations for device 5500.

In various embodiments, PCU or PMU 5510 is organized in a hierarchical manner forming a hierarchical power management (HPM). HPM of various embodiments builds a capability and infrastructure that allows for package level management for the platform, while still catering to islands of autonomy that might exist across the constituent die in the package. HPM does not assume a pre-determined mapping of physical partitions to domains. An HPM domain can be aligned with a function integrated inside a dielet, to a dielet boundary, to one or more dielets, to a companion die, or even a discrete CXL device. HPM addresses integration of multiple instances of the same die, mixed with proprietary functions or 3rd party functions integrated on the same die or separate die, and even accelerators connected via CXL (e.g., Flexbus) that may be inside the package, or in a discrete form factor.

HPM enables designers to meet the goals of scalability, modularity, and late binding. HPM also allows PMU functions that may already exist on other dice to be leveraged, instead of being disabled in the flat scheme. HPM enables management of any arbitrary collection of functions independent of their level of integration. HPM of various embodiments is scalable, modular, works with symmetric multi-chip processors (MCPs), and works with asymmetric MCPs. For example, HPM does not need a single PM controller and package infrastructure to grow beyond reasonable scaling limits. HPM enables late addition of a die in a package without the need for change in the base die infrastructure. HPM addresses the need of disaggregated solutions having dies of different process technology nodes coupled in a single package. HPM also addresses the needs of companion die integration solutions, on and off package. Other technical effects will be evident from the various figures and embodiments.

In some embodiments, device 5500 comprises Power Management Integrated Circuit (PMIC) 5512, e.g., to implement various power management operations for device 5500. In some embodiments, PMIC 5512 is a Reconfigurable Power Management IC (RPMIC) and/or an IMVP (Intel® Mobile Voltage Positioning). In an example, the PMIC is within an IC die separate from processor 5504. The PMIC may implement various power management operations for device 5500. PMIC 5512 may include hardware interfaces, hardware circuitries, connectors, registers, etc., as well as software components (e.g., drivers, protocol stacks), to implement various power management operations for device 5500.

In an example, device 5500 comprises one or both of PCU 5510 and PMIC 5512. In an example, any one of PCU 5510 or PMIC 5512 may be absent in device 5500, and hence, these components are illustrated using dotted lines.

Various power management operations of device 5500 may be performed by PCU 5510, by PMIC 5512, or by a combination of PCU 5510 and PMIC 5512. For example, PCU 5510 and/or PMIC 5512 may select a power state (e.g., P-state) for various components of device 5500. For example, PCU 5510 and/or PMIC 5512 may select a power state (e.g., in accordance with the ACPI (Advanced Configuration and Power Interface) specification) for various components of device 5500. Merely as an example, PCU 5510 and/or PMIC 5512 may cause various components of the device 5500 to transition to a sleep state, to an active state, to an appropriate C state (e.g., C0 state, or another appropriate C state, in accordance with the ACPI specification), etc. In an example, PCU 5510 and/or PMIC 5512 may control a voltage output by VR 5514 and/or a frequency of a clock signal output by the clock generator, e.g., by outputting the VID signal and/or the FID signal, respectively. In an example, PCU 5510 and/or PMIC 5512 may control battery power usage, charging of battery 5518, and features related to power saving operation.

The clock generator 5516 can comprise a phase locked loop (PLL), frequency locked loop (FLL), or any suitable clock source. In some embodiments, each core of processor 5504 has its own clock source. As such, each core can operate at a frequency independent of the frequency of operation of the other cores. In some embodiments, PCU 5510 and/or PMIC 5512 performs adaptive or dynamic frequency scaling or adjustment. For example, clock frequency of a processor core can be increased if the core is not operating at its maximum power consumption threshold or limit. In some embodiments, PCU 5510 and/or PMIC 5512 determines the operating condition of each core of a processor, and opportunistically adjusts frequency and/or power supply voltage of that core without the core clocking source (e.g., PLL of that core) losing lock when the PCU 5510 and/or PMIC 5512 determines that the core is operating below a target performance level. For example, if a core is drawing current from a power supply rail less than a total current allocated for that core or processor 5504, then PCU 5510 and/or PMIC 5512 can temporarily increase the power draw for that core or processor 5504 (e.g., by increasing clock frequency and/or power supply voltage level) so that the core or processor 5504 can perform at a higher performance level. As such, voltage and/or frequency can be increased temporarily for processor 5504 without violating product reliability.
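A minimal sketch of the opportunistic adjustment described above is shown below. The current-allocation check, step sizes, and limits are illustrative assumptions, not values from this disclosure.

# Behavioral sketch of the opportunistic frequency/voltage bump described above.
def adjust_core(core_current_a, allocated_current_a, freq_mhz, vid_mv,
                freq_step_mhz=100, vid_step_mv=10, freq_limit_mhz=5000):
    """If the core draws less current than its allocation, temporarily raise its
    clock frequency (and supply voltage) toward an assumed frequency limit."""
    if core_current_a < allocated_current_a and freq_mhz < freq_limit_mhz:
        return freq_mhz + freq_step_mhz, vid_mv + vid_step_mv
    return freq_mhz, vid_mv   # already at the allocation or the frequency limit

print(adjust_core(core_current_a=8.0, allocated_current_a=12.0,
                  freq_mhz=3200, vid_mv=850))   # -> (3300, 860)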

In an example, PCU 5510 and/or PMIC 5512 may perform power management operations, e.g., based at least in part on receiving measurements from power measurement circuitries 5542, temperature measurement circuitries 5540, charge level of battery 5518, and/or any other appropriate information that may be used for power management. To that end, PMIC 5512 is communicatively coupled to one or more sensors to sense/detect various values/variations in one or more factors having an effect on power/thermal behavior of the system/platform. Examples of the one or more factors include electrical current, voltage droop, temperature, operating frequency, operating voltage, power consumption, inter-core communication activity, etc. One or more of these sensors may be provided in physical proximity (and/or thermal contact/coupling) with one or more components or logic/IP blocks of a computing system. Additionally, sensor(s) may be directly coupled to PCU 5510 and/or PMIC 5512 in at least one embodiment to allow PCU 5510 and/or PMIC 5512 to manage processor core energy at least in part based on value(s) detected by one or more of the sensors.

Also illustrated is an example software stack of device 5500 (although not all elements of the software stack are illustrated). Merely as an example, processors 5504 may execute application programs 5550, Operating System 5552, one or more Power Management (PM) specific application programs (e.g., generically referred to as PM applications 5558), and/or the like. PM applications 5558 may also be executed by the PCU 5510 and/or PMIC 5512. OS 5552 may also include one or more PM applications 5556a, 5556b, 5556c. The OS 5552 may also include various drivers 5554a, 5554b, 5554c, etc., some of which may be specific for power management purposes. In some embodiments, device 5500 may further comprise a Basic Input/Output System (BIOS) 5520. BIOS 5520 may communicate with OS 5552 (e.g., via one or more drivers 5554), communicate with processors 5504, etc.

For example, one or more of PM applications 5558, 5556, drivers 5554, BIOS 5520, etc. may be used to implement power management specific tasks, e.g., to control voltage and/or frequency of various components of device 5500, to control wake-up state, sleep state, and/or any other appropriate power state of various components of device 5500, control battery power usage, charging of the battery 5518, features related to power saving operation, etc.

In some embodiments, battery 5518 is a Li-metal battery with a pressure chamber to allow uniform pressure on a battery. The pressure chamber is supported by metal plates (such as a pressure equalization plate) used to give uniform pressure to the battery. The pressure chamber may include pressured gas, elastic material, spring plate, etc. The outer skin of the pressure chamber is free to bow, restrained at its edges by (metal) skin, but still exerts a uniform pressure on the plate that is compressing the battery cell. The pressure chamber gives uniform pressure to the battery, which is used to enable a high-energy density battery with, for example, 20% more battery life.

In some embodiments, pCode executing on PCU 5510a/b has a capability to enable extra compute and telemetry resources for the runtime support of the pCode. Here, pCode refers to a firmware executed by PCU 5510a/b to manage performance of the SoC 5501. For example, pCode may set frequencies and appropriate voltages for the processor. Parts of the pCode are accessible via OS 5552. In various embodiments, mechanisms and methods are provided that dynamically change an Energy Performance Preference (EPP) value based on workloads, user behavior, and/or system conditions. There may be a well-defined interface between OS 5552 and the pCode. The interface may allow or facilitate the software configuration of several parameters and/or may provide hints to the pCode. As an example, an EPP parameter may inform a pCode algorithm as to whether performance or battery life is more important.

This support may also be provided by OS 5552 by including machine-learning support as part of OS 5552 and either tuning the EPP value that the OS hints to the hardware (e.g., various components of SoC 5501) by machine-learning prediction, or by delivering the machine-learning prediction to the pCode in a manner similar to that done by a Dynamic Tuning Technology (DTT) driver. In this model, OS 5552 may have visibility to the same set of telemetries as are available to a DTT. As a result of a DTT machine-learning hint setting, pCode may tune its internal algorithms to achieve optimal power and performance results following the machine-learning prediction of activation type. For example, the pCode may increase its responsiveness to processor utilization changes to enable fast response to user activity, or it may increase the bias toward energy saving, either by reducing its responsiveness to processor utilization or by saving more power and accepting the performance lost by tuning the energy saving optimization. This approach may facilitate saving more battery life in case the types of activities enabled lose some performance level over what the system can enable. The pCode may include an algorithm for dynamic EPP that may take two inputs, one from OS 5552 and the other from software such as DTT, and may selectively choose to provide higher performance and/or responsiveness. As part of this method, the pCode may enable in the DTT an option to tune its reaction for the DTT for different types of activity.
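For illustration, the dynamic EPP selection described above can be sketched as a simple policy combining the OS hint with a DTT-style activity prediction. The 0-255 EPP encoding (0 favoring performance, 255 favoring energy saving) and the blending rule are assumptions made for this sketch, not the disclosed algorithm.

# Sketch of a dynamic Energy Performance Preference (EPP) policy that combines
# a hint from the OS with a machine-learning hint delivered via a DTT-like
# driver. Encoding and blending rule are illustrative assumptions.

def dynamic_epp(os_epp_hint, dtt_activity_hint):
    """Pick an effective EPP from the OS hint and the predicted activity type."""
    if dtt_activity_hint == "interactive":      # favor responsiveness
        return min(os_epp_hint, 64)
    if dtt_activity_hint == "background":       # favor battery life
        return max(os_epp_hint, 192)
    return os_epp_hint                          # no prediction: honor the OS hint

print(dynamic_epp(os_epp_hint=128, dtt_activity_hint="interactive"))  # 64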

In some embodiments, pCode improves the performance of the SoC in battery mode. In some embodiments, pCode allows drastically higher SoC peak power limit levels (and thus higher Turbo performance) in battery mode. In some embodiments, pCode implements power throttling and is part of Intel’s Dynamic Tuning Technology (DTT). In various embodiments, the peak power limit is referred to as PL4. However, the embodiments are applicable to other peak power limits. In some embodiments, pCode sets the Vth threshold voltage (the voltage level at which the platform will throttle the SoC) in such a way as to prevent the system from unexpected shutdown (or black screening). In some embodiments, pCode calculates the Psoc,pk SoC Peak Power Limit (e.g., PL4) according to the threshold voltage (Vth). These are two dependent parameters; if one is set, the other can be calculated. pCode is used to optimally set one parameter (Vth) based on the system parameters and the history of the operation (a simple source model illustrating this dependence is sketched below). In some embodiments, pCode provides a scheme to dynamically calculate the throttling level (Psoc,th) based on the available battery power (which changes slowly) and set the SoC throttling peak power (Psoc,th). In some embodiments, pCode decides the frequencies and voltages based on Psoc,th. In this case, throttling events have a less negative effect on the SoC performance. Various embodiments provide a scheme which allows a maximum performance (Pmax) framework to operate.

In some embodiments, VR 5514 includes a current sensor to sense and/or measure current through a high-side switch of VR 5514. In some embodiments, the current sensor uses an amplifier with capacitively coupled inputs in feedback to sense the input offset of the amplifier, which can be compensated for during measurement. In some embodiments, the amplifier with capacitively coupled inputs in feedback is used to operate the amplifier in a region where the input common-mode specifications are relaxed, so that the feedback loop gain and/or bandwidth is higher. In some embodiments, the amplifier with capacitively coupled inputs in feedback is used to operate the sensor from the converter input voltage by employing high-PSRR (power supply rejection ratio) regulators to create a local, clean supply voltage, causing less disruption to the power grid in the switch area. In some embodiments, a variant of the design can be used to sample the difference between the input voltage and the controller supply, and recreate that between the drain voltages of the power and replica switches. This allows the sensor to not be exposed to the power supply voltage. In some embodiments, the amplifier with capacitively coupled inputs in feedback is used to compensate for power delivery network related (PDN-related) changes in the input voltage during current sensing.
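The sketch below illustrates the Vth/PL4 dependence noted above using a simple battery source model (open-circuit voltage Voc and internal resistance Rbatt), in which the power deliverable when the rail sags to Vth is Vth * (Voc - Vth) / Rbatt. This model and its parameter values are assumptions used only to show that fixing one parameter fixes the other; the disclosure does not specify this formula.

# Illustrative battery model: the disclosure only states that Vth and the SoC
# peak power limit (e.g., PL4) are dependent; the Voc/Rbatt source model below
# is an assumption used to show how one can be computed from the other.

def peak_power_from_vth(vth_v, voc_v, rbatt_ohm):
    """Power the battery can deliver when its terminal voltage sags to Vth."""
    return vth_v * (voc_v - vth_v) / rbatt_ohm

def vth_from_peak_power(p_w, voc_v, rbatt_ohm):
    """Invert the same model; the higher root keeps Vth above Voc / 2."""
    disc = voc_v ** 2 - 4.0 * rbatt_ohm * p_w
    if disc < 0:
        raise ValueError("requested peak power exceeds what the battery can supply")
    return (voc_v + disc ** 0.5) / 2.0

print(peak_power_from_vth(vth_v=6.0, voc_v=7.6, rbatt_ohm=0.06))   # ~160 W
print(vth_from_peak_power(p_w=160.0, voc_v=7.6, rbatt_ohm=0.06))   # ~6.0 V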

In some embodiments, the processor includes a fully-unrolled SHA256 datapath featuring a latch-based pipeline design clocked by 3-phase non-overlapping clocks. A scheme is described to improve throughput (performance) by modulating the clock duty cycle in a deterministic way to reduce the dead time in the latch-based pipeline design to the minimum that is needed in silicon. A delay-locked loop (DLL) in the clock path is used to generate a non-50% duty cycle clock. The extra high phase of the clock increases the time the latch is transparent. With the introduction of the DLL, the dead time is kept to just the time a particular part needs to satisfy hold time requirements. Another scheme is described to reduce glitch power, where a circuit element (e.g., a latch) is introduced to act as a glitch gate. The latch prevents the early toggling signals from propagating. It is timed such that the logic that resolves last passes through the latch.
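As a behavioral illustration of the clock generation logic circuits described in the Examples below (a sequential unit sampling a daisy-chained enable, which is then ANDed with the forwarded clock to produce one local phase clock), the Python sketch below simulates three such circuits. The enable pattern driving the chain is an assumption chosen so that the non-overlap of the generated phases is visible in the toy simulation; it is not the exact pattern or frequency ratio of the disclosed design.

# Toy behavioral model: each stage has an enable flop clocked by the forwarded
# clock, and clk_i = (sampled enable) AND clk2x. The enable daisy-chains from
# one stage to the next, so the high enable token occupies a different clock
# cycle at each stage and the generated phase clocks never pulse together.

def simulate_phase_clocks(enable_stream, num_phases=3):
    """Return one list of pulse bits per phase, one entry per forwarded-clock cycle."""
    ff = [0] * num_phases                     # enable flop in each circuit
    clocks = [[] for _ in range(num_phases)]
    for en_in in enable_stream:
        # On the clock edge, every stage captures the enable presented by the
        # previous stage (stage 0 takes the external enable).
        ff = [en_in] + ff[:-1]
        for i in range(num_phases):
            clocks[i].append(ff[i])           # pulse while the sampled enable is 1
    return clocks

# Enable token asserted one forwarded-clock cycle out of three (illustrative pattern).
enable = [1 if cycle % 3 == 0 else 0 for cycle in range(12)]
clk1, clk2, clk3 = simulate_phase_clocks(enable)
print(clk1)  # pulses in cycles 0, 3, 6, ...
print(clk2)  # pulses in cycles 1, 4, 7, ...
print(clk3)  # pulses in cycles 2, 5, 8, ...
assert all(c1 + c2 + c3 <= 1 for c1, c2, c3 in zip(clk1, clk2, clk3))  # non-overlapping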

Examples of various embodiments may include the following:

Example 1 may include an apparatus comprising: a processor core; and a hardware accelerator coupled to the processor core, wherein the hardware accelerator includes: a plurality of latch stages; a plurality of message digest datapath rounds, wherein an individual message digest datapath round is coupled between two latch stages of the plurality of latch stages; and a plurality of clock generation logic circuits, wherein an individual clock generation logic circuit is coupled to an individual latch stage of the plurality of latch stages.

Example 2 may include the apparatus of example 1, or some other example herein, wherein the plurality of clock generation logic circuits is to generate non-overlapping clocks including a first clock, a second clock, and a third clock.

Example 3 may include the apparatus of example 2, or some other example herein, wherein an individual clock generation logic circuit generates one of the first, second, or third clocks.

Example 4 may include the apparatus of example 2, or some other example herein, wherein an individual clock generation logic circuit receives an enable from another individual clock generation logic circuit of the plurality of clock generation logic circuits.

Example 5 may include the apparatus of example 2, or some other example herein, wherein the individual clock generation logic circuit includes: a sequential unit having a data input and a clock input, and a first output; and an AND gate to receive the first output and the clock input, and to generate a second output which is one of the first clock, second clock, or third clock.

Example 6 may include the apparatus of example 5, or some other example herein, wherein: the data input is to receive an enable; the clock input is to receive a clock having twice the frequency of the first clock, second clock, or third clock; and the first output is an enable output for another individual clock generation logic circuit.

Example 7 may include the apparatus of example 2, or some other example herein, wherein a plurality of message digest datapath rounds includes a first round and a second round, wherein the plurality of latch stages includes: a first latch clocked by the first clock, wherein an output of the first latch is coupled to an input of the first round; a second latch having an input coupled to an output of the first round, wherein the second latch is clocked by the second clock; and a third latch having an input coupled to an output of the second round, wherein the third latch is clocked by the third clock.

Example 8 may include the apparatus of example 1, or some other example herein, wherein the individual message digest datapath round includes a computational block to: precompute a first summation of a 32-bit message (Wi), a 32-bit round constant (Ki), and a content of a first shifted state register (Gi-1); and store a result of the first summation in a state register (Hi).

Example 9 may include the apparatus of example 8, or some other example herein, wherein the computational block is further to: compute a complement of a content of a second shifted state register (Di-1); compute a second summation of the complement, a content of a second state register (Ei), and a computed value; and store a result of the second summation in a state register (Ai).

Example 10 may include the apparatus of example 1, or some other example herein, wherein the hardware accelerator is to mine digital currency.

Example 11 may include the apparatus of example 10, or some other example herein, wherein the digital currency is a Bitcoin.

Example 12 may include the apparatus of example 1, or some other example herein, wherein an enable from each of the plurality of clock generation logic circuits daisy chains among the plurality of clock generation logic circuits.

Example 13 may include an apparatus comprising: a processor core; and a hardware accelerator coupled to the processor core, wherein the hardware accelerator includes a data path where output inversion at a Boolean function and an output inversion at a carry-save adder tree are removed, resulting in inverted outputs at each stage.

Example 14 may include the apparatus of example 13, or some other example herein, wherein the hardware accelerator further includes: a plurality of latch stages; a plurality of message digest datapath rounds, wherein an individual message digest datapath round is coupled between two latch stages of the plurality of latch stages; and a plurality of clock generation logic circuits, wherein an individual clock generation logic circuit is coupled to an individual latch stage of the plurality of latch stages.

Example 15 may include the apparatus of example 14, or some other example herein, wherein the plurality of clock generation logic circuits is to generate non-overlapping clocks including a first clock, a second clock, and a third clock.

Example 16 may include the apparatus of example 15, or some other example herein, wherein an individual clock generation logic circuit generates one of the first, second, or third clocks.

Example 17 may include the apparatus of example 15, or some other example herein, wherein an individual clock generation logic circuit receives an enable from another individual clock generation logic circuit of the plurality of clock generation logic circuits.

Example 18 may include the apparatus of example 15, or some other example herein, wherein the individual clock generation logic circuit includes: a sequential unit having a data input and a clock input, and a first output; and an AND gate to receive the first output and the clock input, and to generate a second output which is one of the first clock, second clock, or third clock.

Example 19 may include the apparatus of example 13, or some other example herein, wherein the hardware accelerator is to mine digital currency.

Example 20 may include the apparatus of example 19, or some other example herein, wherein the digital currency is Bitcoin.

Example 21 may include an apparatus comprising means to perform one or more elements of a method described in or related to any of examples 1-20, or any other method or process described herein.

Example 22 may include one or more non-transitory computer-readable media comprising instructions to cause an electronic device, upon execution of the instructions by one or more processors of the electronic device, to perform one or more elements of a method described in or related to any of examples 1-20, or any other method or process described herein.

Example 23 may include an apparatus comprising logic, modules, or circuitry to perform one or more elements of a method described in or related to any of examples 1-20, or any other method or process described herein.

Example 24 may include a method, technique, or process as described in or related to any of examples 1-20, or portions or parts thereof.

Example 25 may include an apparatus comprising: one or more processors and one or more computer-readable media comprising instructions that, when executed by the one or more processors, cause the one or more processors to perform the method, techniques, or process as described in or related to any of examples 1-20, or portions thereof.

Example 26 may include a signal as described in or related to any of examples 1-20, or portions or parts thereof.

Example 27 may include a signal encoded with data as described in or related to any of examples 1-20, or portions or parts thereof, or otherwise described in the present disclosure.

Example 28 may include an electromagnetic signal carrying computer-readable instructions, wherein execution of the computer-readable instructions by one or more processors is to cause the one or more processors to perform the method, techniques, or process as described in or related to any of examples 1-20, or portions thereof.

Example 29 may include a computer program comprising instructions, wherein execution of the program by a processing element is to cause the processing element to carry out the method, techniques, or process as described in or related to any of examples 1-20, or portions thereof.

It will be noted that the particular features, structures, functions, or characteristics may be combined in any suitable manner in one or more embodiments. For example, a first embodiment may be combined with a second embodiment anywhere the particular features, structures, functions, or characteristics associated with the two embodiments are not mutually exclusive.

While the disclosure has been described in conjunction with specific embodiments thereof, many alternatives, modifications, and variations of such embodiments will be apparent to those of ordinary skill in the art in light of the foregoing description. The embodiments of the disclosure are intended to embrace all such alternatives, modifications, and variations as fall within the broad scope of the appended claims.

In addition, well-known power/ground connections to integrated circuit (IC) chips and other components may or may not be shown within the presented figures, for simplicity of illustration and discussion, and so as not to obscure the disclosure. Further, arrangements may be shown in block diagram form in order to avoid obscuring the disclosure, and also in view of the fact that specifics with respect to implementation of such block diagram arrangements are highly dependent upon the platform within which the present disclosure is to be implemented (i.e., such specifics should be well within purview of one skilled in the art). Where specific details (e.g., circuits) are set forth in order to describe example embodiments of the disclosure, it should be apparent to one skilled in the art that the disclosure can be practiced without, or with variation of, these specific details. The description is thus to be regarded as illustrative instead of limiting.

An abstract is provided that will allow the reader to ascertain the nature and gist of the technical disclosure. The abstract is submitted with the understanding that it will not be used to limit the scope or meaning of the claims. The following claims are hereby incorporated into the detailed description, with each claim standing on its own as a separate embodiment.




 