

Title:
SHARING RESOURCES AMONG ENTITIES
Document Type and Number:
WIPO Patent Application WO/2001/035221
Kind Code:
A1
Abstract:
Systems, devices, structures, and methods are provided to allow resources to be shared among a plurality of processors. An exemplary system includes a mechanism to grant exclusive control of a resource to a processor, while at the same time, the fast memory of such a processor is maintained in a coherent state. An exemplary structure includes data structures that help to identify the portion of the fast memory of the processor to be maintained in a coherent state. An exemplary method includes a determination of past and present processors that have had access to the resource so as to maintain the coherency of the fast memory of the processor.

Inventors:
RUSTAD MARK D
Application Number:
PCT/US2000/030545
Publication Date:
May 17, 2001
Filing Date:
November 03, 2000
Assignee:
DIGI INT INC (US)
International Classes:
G06F9/46; G06F12/08; G06F12/14; (IPC1-7): G06F9/46; G06F12/08
Foreign References:
US 5694575 A (1997-12-02)
US 5613153 A (1997-03-18)
EP 0563621 A2 (1993-10-06)
Attorney, Agent or Firm:
Viksnins, Ann S. (Lundberg Woessner & Kluth P.O. Box 2938 Minneapolis, MN, US)
Claims:
What is claimed is:
1. A system comprising: a bus; a resource coupled to the bus; and a plurality of entities coupled to the bus, at least one entity among the plurality of entities including a memory, wherein at least a portion of the memory of the at least one entity is selectively reset when the at least one entity has access to the resource.
2. The system of claim 1, wherein the at least one entity is an integrated circuit.
3. The system of claim 1, wherein the resource includes at least a portion of a memory device.
4. The system of claim 1, further comprising a manager to manage at least one request from the plurality of entities to access the resource.
5. The system of claim 1, further comprising an arbiter coupled to the plurality of entities to arbitrate at least one bus request from the plurality of entities.
6. The system of claim 1, wherein the at least a portion of the memory of the at least one entity is not reset when the at least one entity is the same entity that previously had control of the resource.
7. The system of claim 1, wherein the portion of the memory of the at least one entity is selectively reset when the at least one entity is different from an entity that previously had control of the resource.
8. An integrated circuit for allowing at least one resource to be controlled by a processor among a plurality of processors, at least one processor among the plurality of processors including a fast memory, the integrated circuit comprising: a bus; a central computing unit coupled to the bus; and a switch mechanism, coupled to the central computing unit, to switch the control of the at least one resource, wherein a portion of the fast memory of at least one processor of the plurality of processors is selectively reset when the control of the at least one resource is switched.
9. The integrated circuit of claim 8, wherein the portion of the fast memory of the at least one processor is not reset when the at least one processor is the same processor that previously had control of the at least one resource.
10. The integrated circuit of claim 8, wherein the portion of the fast memory of the at least one processor is selectively reset when the at least one processor is different from a processor that previously had control of the at least one resource.
11. The integrated circuit of claim 8, wherein the switch mechanism is a hardware device.
12. The integrated circuit of claim 8, wherein the switch mechanism is a software switch.
13. The integrated circuit of claim 12, wherein the software switch is a Dijkstra primitive.
14. The integrated circuit of claim 8, wherein the at least one resource is a hardware resource.
15. The integrated circuit of claim 14, wherein the hardware resource is a memory.
16. The integrated circuit of claim 8, further comprising a communications channel controller coupled to the bus.
17. The integrated circuit of claim 8, wherein the at least one resource is a software resource.
18. The integrated circuit of claim 17, wherein the software resource is a data structure.
19. The integrated circuit of claim 8, wherein the fast memory is cache memory.
20. An integrated circuit for allowing at least one resource to be shared among a plurality of processors, at least one processor of the plurality of processors including a fast memory, the integrated circuit comprising: a bus; a central computing unit coupled to the bus; and a lock coupled to the central computing unit to reserve exclusive control of the at least one resource to a processor among the plurality of processors, wherein at least a portion of the fast memory of the processor of the plurality of processors is selectively reset when the processor of the plurality of processors has exclusive control of the at least one resource.
21. The integrated circuit of claim 20, wherein the portion of the fast memory of the processor is not reset when the processor is the same processor that previously had exclusive control of the at least one resource.
22. The integrated circuit of claim 20, wherein the portion of the fast memory of the processor is selectively reset when the processor is different from another processor that previously had control of the at least one resource.
23. The integrated circuit of claim 20, wherein the lock is a hardware register.
24. The integrated circuit of claim 20, wherein the lock is a software semaphore.
25. The integrated circuit of claim 24, wherein the software semaphore is a binary semaphore.
26. The integrated circuit of claim 20, further comprising a communications channel controller coupled to the bus.
27. The integrated circuit of claim 20, wherein the fast memory is cache memory.
28. The integrated circuit of claim 27, wherein the cache memory is primary cache memory.
29. The integrated circuit of claim 27, wherein the cache memory is secondary cache memory.
30. A data structure in a machine-readable medium for allowing at least one resource to be shared among a plurality of processors, at least one processor of the plurality of processors including a fast memory, the data structure comprising: a state for indicating that the at least one resource is under control; and a first identifier for identifying a past processor that had exclusive control of the at least one resource.
31. The data structure of claim 30, wherein the data structure is a class, the data structure further comprising an act for resetting at least a portion of the fast memory of the present processor.
32. The data structure of claim 31, further comprising a second identifier for identifying the present processor that has exclusive control of the at least one resource.
33. The data structure of claim 32, further comprising an act for comparing the first identifier and the second identifier, and wherein the act for resetting the fast memory of the present processor is executed when the first identifier is different from the second identifier.
34. The data structure of claim 30, wherein the fast memory is cache memory.
35. The data structure of claim 30, further comprising a data type that is adapted to represent at least one portion of the at least one resource, wherein the data type includes at least one location of the at least one portion of the at least one resource and at least one dimension of the at least one portion of the at least one resource.
36. The data structure of claim 30, further comprising a list that includes at least one location of at least one portion of the at least one resource and at least one dimension of at least one portion of the at least one resource.
37. A method for allowing at least one resource to be shared among a plurality of processors, the method comprising: obtaining exclusive control over the at least one resource by a present processor, the present processor including a fast memory; identifying a past processor to obtain a first identity, wherein the past processor had exclusive control over the at least one resource; and resetting selectively at least a portion of the fast memory of the present processor when the past processor is different from the present processor.
38. The method of claim 37, wherein identifying the present processor further comprises the fast memory as cache memory.
39. The method of claim 38, further comprising identifying a present processor to obtain a second identity, the present processor having exclusive control over the at least one resource, the present processor including a fast memory.
40. The method of claim 39, further comprising comparing the first identity and the second identity so as to determine if the present processor is different from the past processor.
41. The method of claim 37, wherein the progression of the method is in the order presented.
42. A method for scheduling access to at least one resource from among a plurality of processors, the method comprising: obtaining access to the at least one resource from a requesting processor, the requesting processor including a cache memory; excluding access to the at least one resource from the plurality of processors except for the requesting processor; and resetting at least a portion of the cache memory of the requesting processor when the requesting processor is different from a processor that previously had access to the at least one resource.
43. An integrated circuit for allowing at least one resource to be controlled by a processor among a plurality of processors, at least one processor among the plurality of processors including a fast memory, the integrated circuit comprising: a bus; a central computing unit coupled to the bus; a switch mechanism for switching the control of the at least one resource; and a lock, in a cooperative relationship with the switching mechanism, for reserving exclusive control of the at least one resource to a processor among the plurality of processors, wherein at least a portion of the fast memory of the processor of the plurality of processors is selectively reset when the processor of the plurality of processors has exclusive control of the at least one resource.
44. The integrated circuit of claim 43, wherein the fast memory is cache memory.
45. The integrated circuit of claim 43, further comprising a communications channel controller coupled to the bus, wherein the communications channel controller is receptive to diverse communications protocols.
46. The integrated circuit of claim 43, wherein the cooperative relationship of the switch mechanism and the lock maintains cache coherency.
47. The integrated circuit of claim 43, wherein the at least a portion of the fast memory of the at least one processor is not reset when the processor is the same processor that previously had control of the at least one resource.
48. The integrated circuit of claim 43, wherein the portion of the fast memory of the processor is selectively reset when the processor is different from a processor that previously had control of the at least one resource.
49. An integrated circuit for allowing at least one resource to be controlled by a processor among a plurality of processors, at least one processor among the plurality of processors including a fast memory, the integrated circuit comprising: a bus; a central computing unit coupled to the bus; and a scheduler, coupled to the central computing unit, for scheduling the control of the at least one resource, wherein a portion of the fast memory of at least one processor of the plurality of processors is selectively reset when the at least one resource is under control.
50. The integrated circuit of claim 49, wherein the portion of the fast memory of the at least one processor is not reset when the at least one processor is the same processor that previously had control of the at least one resource.
51. The integrated circuit of claim 49, wherein the portion of the fast memory of the at least one processor is selectively reset when the at least one processor is different from a processor that previously had control of the at least one resource.
52. A system comprising: a bus; at least one resource coupled to the bus; a plurality of processors coupled to the bus, at least one processor among the plurality of processors including a fast memory; and a switch mechanism, coupled to the bus, to switch the control of the at least one resource, wherein a portion of the fast memory of at least one processor of the plurality of processors is selectively reset when the control of the at least one resource is switched.
53. The system of claim 52, wherein the portion of the fast memory of the at least one processor is not reset when the at least one processor is the same processor that previously had control of the at least one resource.
54. The system of claim 52, wherein the portion of the fast memory of the at least one processor is selectively reset when the at least one processor is different from a processor that previously had control of the at least one resource.
55. The system of claim 52, wherein the switch mechanism is a hardware device.
56. The system of claim 52, wherein the switch mechanism is a software switch.
57. The system of claim 56, wherein the software switch is a Dijkstra primitive.
58. The system of claim 52, wherein the at least one resource is a hardware resource.
59. The system of claim 58, wherein the hardware resource is a memory.
60. The system of claim 52, wherein the at least one processor includes a communications channel controller.
61. The system of claim 52, wherein the at least one resource is a software resource.
62. The system of claim 61, wherein the software resource is a data structure.
63. The system of claim 52, wherein the fast memory is cache memory.
64. A system comprising: a bus; at least one resource coupled to the bus; a plurality of processors coupled to the bus, at least one processor of the plurality of processors including a fast memory; and a lock to reserve exclusive control of the at least one resource to a processor among the plurality of processors, wherein at least a portion of the fast memory of the processor of the plurality of processors is selectively reset when the processor of the plurality of processors has exclusive control of the at least one resource.
65. The system of claim 64, wherein the portion of the fast memory of the processor is not reset when the processor is the same processor that previously had exclusive control of the at least one resource.
66. The system of claim 64, wherein the portion of the fast memory of the processor is selectively reset when the processor is different from another processor that previously had control of the at least one resource.
67. The system of claim 64, wherein the lock is a hardware register.
68. The system of claim 64, wherein the lock is a software semaphore.
69. The system of claim 68, wherein the software semaphore is a binary semaphore.
70. The system of claim 64, further comprising a communications channel controller coupled to the bus.
71. The system of claim 64, wherein the fast memory is cache memory.
72. The system of claim 71, wherein the cache memory is primary cache memory.
73. The system of claim 71, wherein the cache memory is secondary cache memory.
74. A system comprising: a bus; at least one resource coupled to the bus; a plurality of processors coupled to the bus, at least one processor among the plurality of processors including a fast memory; a switch mechanism to switch the control of the at least one resource; and a lock, in a cooperative relationship with the switching mechanism, to reserve exclusive control of the at least one resource to a processor among the plurality of processors, wherein at least a portion of the fast memory of the processor of the plurality of processors is selectively reset when the processor of the plurality of processors has exclusive control of the at least one resource.
75. The system of claim 74, wherein the fast memory is cache memory.
76. The system of claim 74, wherein the at least one processor includes a communications channel controller, wherein the communications channel controller is receptive to diverse communications protocols.
77. The system of claim 74, wherein the cooperative relationship of the switch mechanism and the lock maintains cache coherency.
78. The system of claim 74, wherein the at least a portion of the fast memory of the at least one processor is not reset when the processor is the same processor that previously had control of the at least one resource.
79. The system of claim 74, wherein the portion of the fast memory of the processor is selectively reset when the processor is different from a processor that previously had control of the at least one resource.
Description:
SHARING RESOURCES AMONG ENTITIES

Technical Field

The present invention relates generally to computer systems. More particularly, it pertains to sharing resources among a plurality of entities in computer systems.

The information technology of today has grown at an unprecedented rate as a result of the synergistic marriage of communication networks and the computer. Milestones in the development of these communication networks have included the telephone networks, radio, television, cable, and communication satellites. Computers have made tremendous progress from being a single, hulking machine operated by a human operator to today's postage-stamp-size integrated circuits. The merging of the communication networks and the computer has replaced the model of forcing workers to bring their work to the machine with a model of allowing anyone to access information on any computer at diverse locations and times.

Certain barriers exist to the continuing advancement of communication networks. Communication networks have leveraged the powerful processing capability of a single computer processor. To increase processing throughput, multiple processors may be engaged in a parallel architecture.

Whereas a single processor may access resources for processing in an orderly manner, each processor in a multiple-processor environment competes with the others for access to resources to complete its own processing workload. In this environment, a resource can be changed or altered by any of the processors. Such changes by a processor could thus adversely affect the operation of other processors that are not privy to the change made by the controlling processor.

Thus, systems, devices, structures, and methods are needed to allow resources to be shared in a multiple-processor environment.

The above-mentioned problems with sharing resources in a multiple- processor environment as well as other problems are addressed by the present invention and will be understood by reading and studying the following specification. Systems, devices, structures, and methods are described which allow resources to be shared in a multiple-processor environment.

In particular, an illustrative embodiment includes an exemplary system.

This system includes a bus and a number of entities connected to the bus. At least one entity among the number of entities includes a memory. The system further includes a resource. At least a portion of the memory of one entity is selectively reset when the entity has access to the resource. For example, the memory of one entity is not reset if the entity is the same entity that previously controlled the shared resource.

Another illustrative embodiment includes an exemplary data structure in a machine-readable medium for allowing at least one resource to be shared in a multiple-processor environment, each processor in the multiple-processor environment including a fast memory. The data structure comprises a state for indicating that the resource is under control, and an identifier for identifying a past processor that had exclusive control of the resource.

A further illustrative embodiment includes an exemplary method for synchronizing access to at least one resource in a multiple-processor environment. The method comprises obtaining access to the at least one resource from a requesting processor, the requesting processor including a cache memory; excluding access to the resource except for the requesting processor; and resetting at least a portion of the cache memory of the requesting processor.

These and other embodiments, aspects, advantages, and features of the present invention will be set forth in part in the description which follows, and in part will become apparent to those skilled in the art by reference to the following description of the invention and referenced drawings or by practice of the invention. The aspects, advantages, and features of the invention are realized and attained by means of the instrumentalities, procedures, and combinations particularly pointed out in the appended claims.

Brief Description of the Drawings

Figure 1 is a block diagram illustrating a system in accordance with one embodiment.

Figure 2 is a block diagram illustrating a system in accordance with one embodiment.

Figure 3 is a block diagram illustrating a system in accordance with one embodiment.

Figure 4 is a block diagram illustrating a system in accordance with one embodiment.

Figure 5 is a block diagram illustrating a data structure in accordance with one embodiment.

Figure 6 is a flow diagram illustrating a method for allowing resources to be shared in accordance with one embodiment.

Detailed Description

In the following detailed description of the invention, reference is made to the accompanying drawings that form a part hereof, and in which are shown, by way of illustration, specific embodiments in which the invention may be practiced. In the drawings, like numerals describe substantially similar components throughout the several views. These embodiments are described in sufficient detail to enable those skilled in the art to practice the invention. Other embodiments may be utilized and structural, logical, and electrical changes may be made without departing from the scope of the present invention.

Figure 1 is a block diagram illustrating a system in accordance with one embodiment. The system 100 includes a communication medium 102. In one embodiment, the communication medium 102 is a bus. In another embodiment, the communication medium 102 is a network.

This communication medium 102 allows data, address and controls to be communicated among manager 112, resource 116, and entities 1040, 1041, 1042, ..., and 104N. In one embodiment, at least one entity among entities 1040, 1041, 1042, ..., and 104N is an integrated circuit. In one embodiment, these entities 1040, 1041, 1042, ..., and 104N may be optionally connected to an arbiter 118 through a connection medium 114; the arbiter 118 arbitrates bus requests.

Each entity 1040, 1041, 1042, ..., and 104N independently may need to access resource 116 to accomplish its workload. To access resource 116, each entity 1040, 1041, 1042, ..., and 104N obtains authorization from the manager 112.

Manager 112 decides which particular entity has the authorization to access the resource 116. In one embodiment, once the entity that has access to the resource 116 has accomplished its task involving the resource 116, the entity notifies the manager 112 to free up the resource 116 for other entities to use. In another embodiment, the manager 112 determines if the entity that has access to resource 116 no longer needs to use the resource 116; in this case, the manager 112 frees up the resource 116 and makes it available for other entities to access.

In one embodiment, the decision to grant authorization to the resource 116 is based on an algorithm. In another embodiment, the decision to grant authorization to the resource 116 is based on the priority of the workload. In another embodiment, the decision to grant authorization to the resource 116 is based upon the earliest request by an entity to access resource 116. In yet another embodiment, the decision to grant authorization to the resource 116 is done in a round-robin fashion.
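For illustration only, the following sketch shows how a round-robin grant policy of the kind described above might look in C. It is not taken from the patent; the names (struct manager, manager_grant_round_robin, NUM_ENTITIES) and the bookkeeping fields are assumptions.

#include <stdbool.h>

#define NUM_ENTITIES 4

struct manager {
    bool pending[NUM_ENTITIES];  /* entities that have requested the resource 116 */
    int  last_granted;           /* index of the entity most recently served */
    int  owner;                  /* current holder of the resource, or -1 if free */
};

/* Grant the resource to the next requesting entity after the one last served. */
int manager_grant_round_robin(struct manager *m)
{
    if (m->owner != -1)
        return -1;                                  /* resource is still held */
    for (int step = 1; step <= NUM_ENTITIES; step++) {
        int candidate = (m->last_granted + step) % NUM_ENTITIES;
        if (m->pending[candidate]) {
            m->pending[candidate] = false;
            m->last_granted = candidate;
            m->owner = candidate;
            return candidate;                       /* this entity now has authorization */
        }
    }
    return -1;                                      /* no entity is waiting */
}

A priority-based or earliest-request policy would differ only in how the candidate entity is chosen.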

The manager 112, in one embodiment, is a software application such as a resource scheduler. In another embodiment, the manager 112 is an integrated circuit.

Figure 2 is a block diagram illustrating a system in accordance with one embodiment. Integrated circuit 2000 includes an internal bus 202. The internal bus 202 allows data and controls to be routed to and from the central computing unit 204 and the port controller 208.

The central computing unit 204 can access fast memory 2220. In one embodiment, fast memory 2220 is primary cache memory.

The port controller (or communications channel controller) 208 is receptive to communication channels 2060, 2061, 2062, ..., 206N. In one embodiment, at least one of these communication channels supports an asynchronous protocol. In another embodiment, at least one of these communication channels supports a synchronous protocol. In another embodiment, at least one of these communication channels can support either an asynchronous or a synchronous protocol. In another embodiment, at least one of these communication channels supports an Asynchronous Transfer Mode (ATM) protocol. In another embodiment, at least one of these communication channels supports an asymmetric digital subscriber line (ADSL) ATM protocol. In another embodiment, at least one of these communication channels supports a High-Level Data Link Control (HDLC) protocol. In yet another embodiment, at least one of these communication channels supports a transparent mode protocol.

In a further embodiment, each of the cited protocols is controllable contemporaneously.

The port controller 208 manages data from communication channels 2060, 2061, 2062, ..., 206N before the data is made available to the rest of integrated circuit 2000 for further processing. The port controller 208 communicates the data from communication channels 2060, 2061, 2062, ..., 206N through the internal bus 202. The port controller 208, in one embodiment, includes local area network (LAN) support. In another embodiment, the port controller 208 includes metropolitan area network (MAN) support. In yet another embodiment, the port controller 208 includes wide area network (WAN) support. In a further embodiment, the port controller 208 includes Internet support.

Interface 210 coordinates data and controls from the integrated circuit 2000 to the bus 214. When the integrated circuit 2000 requires access to a resource outside of the integrated circuit 2000, the integrated circuit 2000 communicates with the interface 210 to establish access. When the integrated circuit 2000 is providing data to a requesting client outside of the integrated circuit 2000, the integrated circuit 2000 communicates with the interface 210 to push the data to the requesting client. For illustrative purposes, the requesting client may be processor 2001.

The bus 214 allows data and controls to be routed to and from integrated circuit 2000, a resource 216, a processor 2001, and a processor 2002. In one embodiment, the resource 216 is a random access memory, such as synchronous dynamic random access memory (SDRAM). In another embodiment, the resource 216 is a memory device, such as a hard disk. In another embodiment, the resource 216 is a modifiable data source containing a data structure 218. In yet another embodiment, the resource 216 is a writable CD-ROM. In a further embodiment, the resource 216 is a computer, such as a server.

In one embodiment, the processor 2001 includes the architecture of the integrated circuit 2000. The processor 2001 includes a primary fast memory 2221. The primary fast memory 2221 stores computer instructions and data before they are loaded into the processor 2001 for processing. Processor 2001 accesses the primary fast memory 2221 for instructions and data that are needed repeatedly for program execution. The time to access instructions and data in the primary fast memory 2221 is shorter than for instructions and data stored in secondary fast memory 224 and main memory 2260. In one embodiment, the primary fast memory 2221 includes cache memory.

Processor 2002 may be similar to processor 2001 described above. Processor 2002 also contains primary fast memory 2222. In one embodiment, processor 2002 does not have secondary fast memory; instead, processor 2002 is coupled directly to the main memory 2261.

The switching mechanism 212 indicates whether the resource 216 is available for use. In one embodiment, the switching mechanism may be a component of the integrated circuit 2000; in this embodiment, the switching mechanism is coupled to the central computing unit 204. In another embodiment, the switch mechanism may be a part of the arbiter 118 of figure 1.

For illustrative purposes, suppose the processor 2001 needs to access the resource 216 to use a portion of the resource 216. In one embodiment, the resource 216 may be a memory device. In one embodiment, the portion of resource 216 is the data structure 218. The processor 2001 communicates with the switching mechanism 212 through the bus 214. The processor 2001 requests the switching mechanism 212 to have access to the resource 216. If the switching mechanism 212 determines that the resource 216 is available for access, it switches control of the resource 216 to the processor 2001. While the processor 2001 has access to the resource 216, the switching mechanism 212 denies access to other processors that request access to the resource 216, such as processor 2002.

At least a portion of the fast memory 2221 may be reset when the processor 2001 has access to the resource 216. For illustrative purposes only, suppose the processor 2001 uses the data structure 218 repeatedly; the fast memory 2221 then stores a copy of at least a portion of the data structure 218. This enables the processor 2001 to spend more of its time processing, since accessing the data structure 218 through bus 214 and resource 216 is slower than accessing a copy of the data structure through the fast memory 2221.

Once the processor 2001 no longer needs to use the data structure 218, it informs the switching mechanism 212.

For further illustrative purposes, suppose the processor 2002 requests access to the resource 216 to use the data structure 218. Since the resource 216 is available, the processor 2002 obtains access to the data structure 218. A portion of the primary fast memory 2222 of processor 2002 is selectively reset. In one embodiment, selectively resetting means resetting every time. In another embodiment, selectively resetting means resetting only if a certain condition is satisfied; one condition, for example, may include instances where a different processor than processor 2002 had exclusive use of the resource 216. Processor 2002 then makes changes to the data structure 218. Once the processor 2002 no longer needs the data structure 218, it informs the switching mechanism 212.
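A minimal sketch of the conditional form of selective resetting follows, assuming a hypothetical platform routine cache_invalidate_range() and illustrative names: it records which processor used the resource last and invalidates the requester's cached copy of the shared region only when the previous user was a different processor.

#include <stddef.h>

extern void cache_invalidate_range(void *addr, size_t len);  /* assumed platform hook */

struct shared_region {
    int    last_owner;   /* id of the processor that last had access */
    void  *base;         /* start of the shared portion, e.g. the data structure 218 */
    size_t len;          /* its size in bytes */
};

void on_access_granted(struct shared_region *r, int my_id)
{
    if (r->last_owner != my_id)                  /* a different processor used it last */
        cache_invalidate_range(r->base, r->len); /* drop the possibly stale cached copy */
    r->last_owner = my_id;                       /* record the present user */
}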

Next, for further illustrative purposes, suppose the processor 2001 again requests access to the resource 216 to use the data structure 218. Processor 2001 is granted access since no other processor is using the data structure 218. Processor 2001 then proceeds to access the data structure 218. But it has a copy of at least a portion of the data structure 218 in its primary fast memory 2221 already. However, this copy of the portion of the data structure 218 may not be the same as the portion of the data structure 218 in resource 216. The portion of the data structure 218 in resource 216 may have been changed previously by processor 2002.

In one embodiment, at least a portion of the fast memory 2221 is reset so that the processor 2001, instead of using the copy of the portion of the data structure 218 in its primary fast memory 2221, would have to again access the portion of the data structure 218 from resource 216. In another embodiment, all of the fast memory 2221 is reset. In one embodiment, the fast memory 2221 is reset if another processor had access to the fast memory 2221 since the last time processor 2001 had access to the fast memory 2221.

In one embodiment, the switching mechanism 212 is a hardware device, such as a register. In another embodiment, the switching mechanism 212 is a software switch. In another embodiment, the switching mechanism 212 is a Dijkstra primitive. In yet another embodiment, the switching mechanism 212 may reset at least a portion of the fast memory 2221 upon granting access. In a further embodiment, the switching mechanism 212 may reset all of the fast memory 2221 upon granting access.

Figure 3 is a block diagram illustrating a system in accordance with one embodiment. Figure 3 contains similar elements of figure 2 except that figure 3 includes an operating system 328 and a lock 330. The description of similar elements in figure 2 is incorporated here in figure 3. The operating system 328 is executed on the central computing unit 304. In one embodiment, the operating system 328 includes the lock 330. In another embodiment, the lock 330 exists outside the operating system 328 or outside of the integrated circuit 3000.

The lock 330 secures the resource 316 for the exclusive use of a processor. In an exemplary embodiment, the processor 3001 needs to access the resource 316 to use a portion of the data structure 318. The processor 3001 requests the lock 330 for exclusive access to the resource 316. If the lock 330 has not secured the resource 316 for another entity to use, it locks the resource 316 to the exclusive use of the processor 3001. Other processors that request to use the resource 316, such as processor 3002, wait until the resource 316 is again made available by the lock 330.

At least a portion of the fast memory 3221 may be reset when the processor 3001 has exclusive access to the resource 316. For illustrative purposes only, suppose the processor 3001 uses the data structure 318 repeatedly; the fast memory 3221 then stores a copy of at least a portion of the data structure 318 to reduce bandwidth usage and memory latency. Once the processor 3001 no longer needs to use the data structure 318, it informs the lock 330 to unlock the resource 316 for other entities to use.

For further illustrative purposes, suppose the processor 3002 requests exclusive access to the resource 316. Since the resource 316 is available, the processor 3002 obtains a lock to the data structure 318 for its exclusive use. A portion of the primary fast memory 3222 of processor 3002 is selectively reset. In one embodiment, the portion of the memory is reset every time. In another embodiment, the portion of the memory is reset if a certain condition is satisfied; one condition, for example, may include instances where a different processor than processor 3002 had exclusive use of the resource 316. Processor 3002 then makes changes to the data structure 318. Once the processor 3002 no longer needs the data structure 318, it informs the lock 330 to unlock the resource 316.

Next, for further illustrative purposes, suppose the processor 3001 again requests exclusive access to the resource 316 to use a portion of the data structure 318. Processor 3001 obtains the lock since no other processor is using the data structure 318. Processor 3001 then proceeds to access the portion of the data structure 318. But it has a copy of at least a portion of the data structure 318 in its primary fast memory 3221 already. However, this copy of the portion of the data structure 318 is not the same as the portion of the data structure 318 in resource 316. The data structure 318 in resource 316 may have been changed previously by processor 3002. In one embodiment, at least a portion of the fast memory 3221 is reset so that the processor 3001, instead of using the copy of the data structure 318 in its primary fast memory 3221, would have to again access the data structure 318 from resource 316. In another embodiment, all of the fast memory 3221 is reset. The fast memory 3221 is reset if another processor had access to the fast memory 3221 since the last time processor 3001 had access to the fast memory 3221.

In one embodiment, the lock 330 is a hardware register. In another embodiment, the lock 330 is a software semaphore. In another embodiment, the lock 330 is a binary semaphore. In another embodiment, the lock 330 is a counting semaphore.
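One way the software-semaphore form of the lock 330 could be realized is as a binary semaphore built on an atomic test-and-set. The sketch below uses C11 atomics and is an assumption about one possible realization, not the patent's implementation.

#include <stdatomic.h>
#include <stdbool.h>

typedef struct {
    atomic_flag held;            /* set while some processor owns the resource */
} binary_semaphore;

#define BINARY_SEMAPHORE_INIT { ATOMIC_FLAG_INIT }

/* Returns true when the caller now has exclusive control, false if it is busy. */
static inline bool sem_try_acquire(binary_semaphore *s)
{
    return !atomic_flag_test_and_set_explicit(&s->held, memory_order_acquire);
}

static inline void sem_release(binary_semaphore *s)
{
    atomic_flag_clear_explicit(&s->held, memory_order_release);
}

A counting semaphore would replace the flag with an atomic counter, and a hardware-register lock would replace the atomic operations with reads and writes of that register.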

Figure 4 is a block diagram illustrating a system in accordance with one embodiment. Figure 4 contains similar elements to those of figure 3 and figure 2. The description of similar elements is incorporated here in full. Figure 4 includes both the switching mechanism 412 and the lock 430. The switching mechanism 412 may operate differently than described heretofore.

In an illustrative embodiment, suppose the processor 4001 needs to use a portion of the data structure 418. The processor 4001 requests the switching mechanism 412 to grant control of the resource 416. If the switching mechanism 412 has not granted control to another processor, the switching mechanism 412 switches control to the processor 4001. While the processor 4001 has control, the switching mechanism 412 denies access to other processors that request similar control, such as processor 4002. Once the processor 4001 obtains control of the resource 416, it communicates with the switching mechanism 412 that it has control; the switching mechanism then again allows other processors to request control of the resource 416. Thus, the function of the switching mechanism 412 can be likened to a global switch.

After eliminating contending accesses from other processors, the processor 4001 verifies with the lock 430 to determine whether the data structure 418 has actually been locked. In one embodiment, the lock 430 may reside within the data structure 418.

If the data structure 418 has not been locked, the processor 4001 obtains the lock. At least a portion of the fast memory 4221 may be reset when the processor 4001 has access to the data structure 418. For illustrative purposes, suppose the processor 4001 uses the data structure 418 repeatedly; the fast memory 4221 then stores a copy of at least a portion of the data structure 418. From then on, the processor 4001 uses the copy of the data structure 418 unless changes are made to the data structure 418. Once the processor 4001 no longer needs to use the data structure 418, it informs the lock 430 and releases the lock 430 on the data structure 418.

For further illustrative purposes, suppose the processor 4002 requests control of the data structure 418 while the data structure 418 is locked by processor 4001. Since the switching mechanism 412 has not allocated that control, the processor 4002 obtains control. The processor 4002 proceeds to lock the data structure 418. However, since the data structure 418 has already been locked by processor 4001, the attempt by the processor 4002 to lock the data structure 418 is denied. The processor 4002 then releases control to the switching mechanism 412; other processors then attempt to grab control, and the processor 4002 waits for its chance to gain control again.
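The acquire-and-retry hand-off just described can be summarized in a short sketch. Here switch_acquire()/switch_release() stand in for the global switching mechanism 412 and wait_briefly() for the back-off; all names are hypothetical and the semantics are assumed for illustration.

#include <stdbool.h>

extern void switch_acquire(void);   /* obtain the global switch (mechanism 412) */
extern void switch_release(void);   /* hand the switch back */
extern void wait_briefly(void);     /* back off before retrying */

struct guarded_structure {
    bool locked;                    /* per-structure lock, e.g. for data structure 418 */
    /* ... the shared data itself ... */
};

void acquire_structure(struct guarded_structure *ds)
{
    for (;;) {
        switch_acquire();           /* momentarily exclude other requesters */
        if (!ds->locked) {          /* verify the structure is actually free */
            ds->locked = true;      /* take the per-structure lock */
            switch_release();       /* let others contend again */
            return;
        }
        switch_release();           /* already locked: release the switch ... */
        wait_briefly();             /* ... and try again later */
    }
}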

For further illustrative purposes, suppose subsequently that the processor 4002 obtains control and gains access to the data structure 418 when the processor 4001 releases the lock on the data structure 418. A portion of the primary fast memory 4222 of processor 4002 is selectively reset. In one embodiment, the memory is reset every time. In another embodiment, the memory is reset when a certain condition is satisfied; one condition, for example, may include instances where a different processor than processor 4002 had exclusive use of the resource 416. Processor 4002 then makes changes to the data structure 418. Once the processor 4002 no longer needs the data structure 418, it releases the lock on the data structure 418.

Next, for further illustrative purposes, suppose the processor 4001 again requests access to the data structure 418 through the above-described process. Processor 4001 is granted access since no other processor is using the data structure 418. Processor 4001 then proceeds to access the data structure 418. But it has a copy of at least a portion of the data structure 418 in its primary fast memory 4221 already. However, this copy of the data structure 418 is not the same as the data structure 418 in resource 416. The data structure 418 in resource 416 may have been modified previously by processor 4002. In one embodiment, at least a portion of the fast memory 4221 may be reset so that the processor 4001, instead of using the copy of the data structure 418 in its primary fast memory 4221, would have to again access the data structure 418 from resource 416. In another embodiment, all of the fast memory 4221 may be reset.

In one embodiment, the lock 430 is a data structure. The fast memory 4221 is only reset if another processor had access to memory since the last time processor 4001 had access to the fast memory 4221.

Figure 5 is a block diagram illustrating a data structure in accordance with one embodiment. Data structure 500 is used to schedule accesses to resource 516. The data structure 500 includes several data variables. The data variable "lock state" 502 contains information about whether the resource 516 has been locked or not. The data variable "last user id" 504 contains information to identify the last processor that accessed the resource 516. The data variable "present user id" 506 contains information to identify the current processor that accesses the resource 516. The data variable "resource" 524 contains at least a location and a dimension of a portion of resource 516 where such is being accessed. In one embodiment, data variable "resource" 524 is a pointer to a list 526 containing at least a location and at least a dimension of a portion of resource 516 where such is being accessed. The data variable "resource location" 508 contains the address of a portion of the resource 516. The data variable "resource dimension" 510 contains the size of a portion of the resource 516. In one embodiment, the list 526 may be implemented as an array data structure. In another embodiment, the list 526 may be implemented as a linked list.

In one embodiment, the data variables "resource location" 508 and "resource dimension" 510 are indicative of the area of the resource that the data structure 500 protects. When control of the access to the resource 516 changes hands from one processor to another, the processor having present access resets the portion of the fast memory that relates to the area indicated by the data variables "resource location" 508 and "resource dimension" 510. In another embodiment, the data variables "resource location" 508 and "resource dimension" 510 are not used; instead, all portions of the fast memory are reset except for the portion storing stack data.
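One plausible C layout for the data structure 500 is sketched below. The field names mirror the data variables described above, while the concrete types are assumptions made only for illustration.

#include <stdbool.h>
#include <stddef.h>

struct resource_region {            /* one entry of the list 526 */
    void  *location;                /* "resource location" 508: address of the portion */
    size_t dimension;               /* "resource dimension" 510: size of the portion */
};

struct resource_guard {             /* the data structure 500 */
    bool   lock_state;              /* "lock state" 502: whether the resource 516 is locked */
    int    last_user_id;            /* "last user id" 504: last processor that accessed it */
    int    present_user_id;         /* "present user id" 506: processor accessing it now */
    struct resource_region *regions;    /* "resource" 524: list of protected portions */
    size_t region_count;
};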

In one embodiment, the data structure 500 is a class. In that embodiment, the data structure 500 further includes a method "resetting" 512. This method is used to reset the fast memory of a processor that has access to the resource 516. This method inhibits cache incoherence so that the processor does not inadvertently use a copy of old data.

In the embodiment where the data structure 500 is a class, the data structure 500 further includes a method "comparing" 514. This method compares the data variables "last user id" 504 and "present user id" 506. If the data variables "last user id" 504 and "present user id" 506 are the same, then the method "resetting" 512 is not executed. If, however, the data variables "last user id" 504 and "present user id" 506 are different, then the method "resetting" 512 may be executed.
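Continuing the sketch above, the "comparing" and "resetting" methods might reduce to two small functions. cache_invalidate_range() is again a hypothetical platform routine, and the reset runs only when the last and present user ids differ, matching the behavior described in the text.

#include <stddef.h>

/* Uses struct resource_guard and struct resource_region from the sketch above. */
extern void cache_invalidate_range(void *addr, size_t len);   /* assumed platform hook */

static void guard_resetting(const struct resource_guard *g)
{
    for (size_t i = 0; i < g->region_count; i++)
        cache_invalidate_range(g->regions[i].location, g->regions[i].dimension);
}

static void guard_comparing(const struct resource_guard *g)
{
    if (g->last_user_id != g->present_user_id)   /* a different processor used it last */
        guard_resetting(g);                      /* so the present user's fast memory is reset */
}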

Processors 5180, 5181, 5182, ..., and 518N use the data structure 500 for orderly access to resource 516. The data structure 500 ensures that only one processor among processors 5180, 5181, 5182, ..., and 518N may access the resource 516 at any one time. Processors 5180, 5181, and 518N have primary fast memory 5200, 5201, and 520N, respectively. In one embodiment, the data structure 500 is responsible for cache coherency by resetting at least a portion of the primary fast memory of the processor currently accessing the resource 516.

In another embodiment, the processors 5180, 5181, and 518N are responsible for resetting at least a portion of the primary fast memory 5200, 5201, and 520N, respectively, to ensure cache coherency.

In another embodiment, the processor 5182 does not have primary fast memory. In this case, neither the data structure 500 nor the processor 5182 needs to reset any primary fast memory.

In another embodiment, the processor 518N has not only the primary fast memory 520N but also secondary fast memory 522. In one embodiment, the data structure 500 would be responsible for resetting at least a portion of the primary fast memory 520N and also at least a portion of the secondary fast memory 522 to ensure cache coherency. In another embodiment, the processor 518N ensures cache coherency by resetting at least a portion of the primary fast memory 520N and at least a portion of the secondary fast memory 522.

Figure 6 is a flow diagram illustrating a method for allowing resources to be shared in accordance with one embodiment. In the present embodiment, the entity that requests access to the resource to use it in some way can be a user, a processor, or a software client. For explanatory purposes, the processor will be used to describe the following embodiment.

A processor requests access to a resource to do some work. In one embodiment, such work may entail reading from the resource to obtain certain information. In another embodiment, such work may entail writing to the resource to store certain information. In another embodiment, such work may entail both reading and writing. In another embodiment, such work may be to execute certain processes on the resource. In another embodiment, such work may be to control the resource.

The processor begins by checking at block 600 to see if the resource is available. If the resource is not available, the processor then waits at block 604 and subsequently retries to gain access to the resource at block 600. If the resource is available, the processor attempts to gain control of the resource at block 602.

Obtaining control may include locking the resource for exclusive access and inhibiting others from contending for access.

Next, at block 604, the identity of the last processor that had accessed the resource is obtained. Then, at block 606, the identity of the present processor that has accessed the resource is obtained. At block 608, the identity of the last processor and the identity of the present processor are compared. If the identities are the same, the method goes to block 612. Otherwise, if the identities are different, block 610 resets at least a portion of the cache memory of the processor.

At block 612, the present processor accesses the resource. Once the present processor has finished using the resource, it releases the resource so that other processors may obtain access to it.
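The flow of Figure 6 can be gathered into a single routine. The sketch below is an assumption in C: the extern functions (resource_try_lock, resource_get_last_user, cache_invalidate_all, and so on) are hypothetical hooks standing in for the blocks of the figure rather than any actual interface from the patent.

#include <stdbool.h>

extern bool resource_try_lock(void);          /* blocks 600/602: check availability, gain control */
extern void resource_release(void);           /* make the resource available again */
extern int  resource_get_last_user(void);     /* block 604: identity of the last processor */
extern void resource_set_last_user(int id);   /* block 606: record the present processor */
extern void cache_invalidate_all(void);       /* block 610: reset (part of) the cache */
extern void wait_briefly(void);               /* wait before retrying */
extern void do_work_on_resource(void);        /* block 612: use the resource */

void access_shared_resource(int my_id)
{
    while (!resource_try_lock())      /* resource not yet available */
        wait_briefly();               /* wait, then retry */

    int last = resource_get_last_user();
    resource_set_last_user(my_id);
    if (last != my_id)                /* block 608: identities differ */
        cache_invalidate_all();       /* so cached copies may be stale */

    do_work_on_resource();
    resource_release();               /* let other processors obtain access */
}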

Conclusion

Thus, systems, devices, structures, and methods have been described to share resources among a plurality of processors. The described embodiments allow resources to be shared without the use of complex bus-snooping and cache-invalidation hardware. Because such hardware is also expensive, the described embodiments reduce cost. The present embodiments also offer an integrated solution on one chip with a small footprint because they do not use the complicated bus architecture of bus-snooping and cache-invalidation hardware.

Although the specific embodiments have been illustrated and described herein, it will be appreciated by those of ordinary skill in the art that any arrangement which is calculated to achieve the same purpose may be substituted for the specific embodiment shown. This application is intended to cover any adaptations or variations of the present invention. It is to be understood that the above description is intended to be illustrative and not restrictive. Combinations of the above embodiments and other embodiments will be apparent to those of skill in the art upon reviewing the above description. The scope of the invention includes any other applications in which the above structures and fabrication methods are used. Accordingly, the scope of the invention should only be determined with reference to the appended claims, along with the full scope of equivalents to which such claims are entitled.