
Multi-level Cache properties

[Figure: Cache organization with a separate L1 cache and a unified L2 cache]

Separate Versus Unified

In a separate cache organization, the cache is divided into an instruction cache and a data cache. In contrast, a unified cache holds both instructions and data in the same structure. During execution, the upper-level cache is accessed every cycle to supply instructions to the processor, and it must also be accessed for data. Serving both requests at the same time requires multiple ports in the cache, which increases access time and demands additional hardware and wiring, resulting in a large structure. Therefore, the L1 cache is organized as a separate cache, which results in fewer ports, less hardware, and lower access time.

The lower-level caches, L2 and L3, are accessed only when there is a miss in the L1 cache, so they are accessed far less frequently than L1. A unified organization is therefore used for the lower-level caches, as a single port suffices.
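
The port trade-off can be illustrated with a minimal Python sketch (a hypothetical model, not real hardware: caches are plain dictionaries keyed by address, and one call to cycle() stands in for one clock cycle):

    # A split L1 serves an instruction fetch and a data access in the
    # same cycle because each half has its own port; a single-ported
    # unified cache must serialize the two requests.
    class SplitL1:
        def __init__(self):
            self.icache = {}  # instruction blocks, own port
            self.dcache = {}  # data blocks, own port

        def cycle(self, pc, data_addr):
            # Both lookups proceed in parallel in hardware.
            return pc in self.icache, data_addr in self.dcache

    class UnifiedSinglePort:
        def __init__(self):
            self.cache = {}   # instructions and data share one port

        def cycle(self, pc, data_addr):
            # One port: the instruction fetch goes first and the data
            # access must stall until the next cycle.
            return pc in self.cache, None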

Inclusion policies

[Figure: Inclusive cache organization]

Whether a block present in an upper cache level may also be present in a lower cache level is governed by one of the inclusion policies below:

  • Inclusive
  • Exclusive

Under the Inclusive policy, every block present in the upper-level cache must also be present in the lower-level cache, so each upper-level cache is a subset of the lower-level cache. Because blocks are duplicated, some cache capacity is wasted. However, checking is simpler: if the lower-level cache does not hold a block, the upper-level cache cannot hold it either.
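
The lookup shortcut that inclusion provides can be shown in a short Python sketch (hypothetical; each cache is modelled as a set of block addresses):

    # Inclusive hierarchy: every L1 block is also in L2, so an L2 miss
    # proves the block is not in L1 either.
    l1 = {0x10, 0x20}
    l2 = {0x10, 0x20, 0x30, 0x40}    # superset of l1, by the inclusion rule
    assert l1 <= l2                  # the invariant the hardware maintains

    def may_be_in_l1(block):
        # If L2 does not hold the block, inclusion guarantees L1 cannot.
        return block in l2

    print(may_be_in_l1(0x50))        # False: no need to probe L1 at all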

Under the exclusive policy, the levels of the cache hierarchy are completely disjoint: a block present in the upper-level cache is not present in any lower-level cache. This makes full use of the cache capacity, since no block is stored in more than one level. However, memory access latency is higher.
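
Exclusion is typically maintained by moving, rather than copying, blocks between levels. The following hypothetical Python sketch shows the idea (the eviction of L1 victims down into L2, which a real exclusive hierarchy also performs, is omitted for brevity):

    # Exclusive hierarchy: a block lives in exactly one level, so the
    # usable capacity is the sum of the level sizes.
    l1, l2 = set(), {0x10, 0x20}

    def access(block):
        if block in l1:
            return "L1 hit"
        if block in l2:
            l2.discard(block)        # move (not copy) the block up to L1
            l1.add(block)
            return "L2 hit, promoted to L1"
        l1.add(block)                # miss: fill L1 only
        return "miss"

    print(access(0x10))              # L2 hit, promoted to L1
    assert not (l1 & l2)             # exclusion invariant: no overlap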

Write Policies

There are two policies that define the way in which a modified cache block is updated in main memory:

  • Write Through
  • Write Back

Under the Write Through policy, whenever the value of a cache block changes, the change is also made in the lower levels of the memory hierarchy. This policy ensures that the data is stored safely, as it is written throughout the hierarchy.
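
A minimal sketch of Write Through in Python (hypothetical; a dictionary named memory stands in for the lower level of the hierarchy):

    # Write-through: every store updates the cache and the lower level
    # immediately, so memory always holds a current copy.
    cache, memory = {}, {}

    def write_through(addr, value):
        cache[addr] = value     # update the cache block
        memory[addr] = value    # propagate the write at once

    write_through(0x100, 42)
    assert memory[0x100] == 42  # memory is never stale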

Under the Write Back policy, a changed cache block is updated in the lower level of the hierarchy only when the block is evicted. Since writing back every evicted block is not efficient, a dirty bit is attached to each cache block. The dirty bit is set whenever the cache block is modified; during eviction, only blocks whose dirty bit is set are written to the lower level of the hierarchy, after which the dirty bit is cleared. Under this policy there is a risk of data loss, as the only valid copy of the data may be the one in the cache, so error-correction techniques need to be implemented.
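
The dirty-bit mechanism can be sketched in the same hypothetical Python style:

    # Write-back: stores only set a dirty bit; memory is updated when a
    # dirty block is evicted, saving traffic for frequently written blocks.
    cache = {}    # addr -> (value, dirty_bit)
    memory = {}

    def write_back(addr, value):
        cache[addr] = (value, True)      # mark the block dirty

    def evict(addr):
        value, dirty = cache.pop(addr)
        if dirty:                        # only dirty blocks reach memory
            memory[addr] = value

    write_back(0x100, 1)
    write_back(0x100, 2)   # second store hits in the cache; no memory traffic
    evict(0x100)
    assert memory[0x100] == 2            # only the final value is written back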

In the case of a write miss, where the target block is not present in the cache, the policies below determine whether the block is brought into the cache:

  • Write Allocate
  • Write No-Allocate

The Write Allocate policy states that when a block is not found in the cache, it is fetched from main memory and placed in the cache before the write is performed. Under the Write No-Allocate policy, a write that misses in the cache is performed directly in the lower level of the memory hierarchy, without fetching the block into the cache.
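
Both miss policies fit in one hypothetical Python function (blocks are simplified to whole values, so the fetch that a real write-allocate cache performs before a partial write is reduced to a comment):

    def store(addr, value, cache, memory, write_allocate):
        if addr in cache:
            cache[addr] = value    # write hit: update the cached block
        elif write_allocate:
            # Write Allocate: bring the block into the cache, then write.
            cache[addr] = value
        else:
            # Write No-Allocate: update memory directly, leaving the
            # cache unchanged.
            memory[addr] = value

    cache, memory = {}, {}
    store(0x200, 7, cache, memory, write_allocate=True)
    assert 0x200 in cache                 # block allocated on the write miss
    store(0x300, 9, cache, memory, write_allocate=False)
    assert 0x300 not in cache and memory[0x300] == 9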

The common combinations of these policies are Write Back with Write Allocate, and Write Through with Write No-Allocate.

[Figure: Cache organization with L1 private and L2 and L3 shared]

Shared Versus Private

In multicore processors, whether the cache is organized as shared or private affects the performance of the processor. A private cache belongs to one particular core and cannot be accessed by the other cores. Since each core has its own private cache, the same block may be duplicated across several caches, which reduces capacity utilization. However, this organization results in lower cache hit latency.

A shared cache is one that is shared among multiple cores and can therefore be accessed directly by any of them. Since it is shared, each block in the cache is unique, which yields a higher hit rate, as there are no duplicate blocks. However, cache hit latency is higher, as multiple cores contend for access to the same cache.

In practice, the upper-level L1 cache is implemented as private, and the lower-level caches are implemented as shared.
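
This arrangement can be sketched as follows (hypothetical Python; caches are sets of block addresses and NUM_CORES is an arbitrary choice):

    # Each core probes its own private L1 first; on a miss the request
    # falls through to the single L2 that all cores share.
    NUM_CORES = 4
    private_l1 = [set() for _ in range(NUM_CORES)]  # one L1 per core
    shared_l2 = set()                               # one L2 for all cores

    def lookup(core, block):
        if block in private_l1[core]:
            return "L1 hit (fast, no contention)"
        if block in shared_l2:
            private_l1[core].add(block)  # fill this core's private L1
            return "L2 hit (shared, slower under contention)"
        return "miss"

Note that filling each core's private L1 from the shared L2 is exactly how the same block ends up duplicated across the private caches.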
