Thrashing (computer science)

In computer science, thrashing occurs in a system with virtual memory when a computer's real storage resources are overcommitted, leading to a constant state of paging and page faults, slowing most application-level processing.[1] This causes the performance of the computer to degrade or even collapse. The situation can continue indefinitely until the user closes some running applications or the active processes free up additional virtual memory resources.

After initialization, most programs operate on a small number of code and data pages compared to the total memory the program requires. The pages most frequently accessed at any point are called the working set, which may change over time.

When the working set is not significantly greater than the system's total number of real storage page frames, virtual memory systems work most efficiently, and an insignificant amount of computing is spent resolving page faults. As the total of the working sets grows, resolving page faults remains manageable until the growth reaches a critical point at which the number of faults increases dramatically and the time spent resolving them overwhelms the time spent on the computing the program was written to do. This condition is referred to as thrashing. Thrashing may occur on a program that randomly accesses huge data structures, as its large working set causes continual page faults that drastically slow down the system. Satisfying page faults may require freeing pages that will soon have to be re-read from disk.
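
As an illustration of a working set that exceeds physical memory, the following C sketch dirties pages of a large buffer in an essentially random order; the buffer size and touch count are assumed figures chosen so that the buffer is larger than installed RAM, at which point nearly every access forces one page to be swapped out and another swapped in:

```c
#include <stdio.h>
#include <stdlib.h>

#define BUF_BYTES (8UL << 30)            /* 8 GiB: assumed larger than RAM  */
#define PAGE      4096UL                 /* typical page size               */
#define N_PAGES   (BUF_BYTES / PAGE)
#define TOUCHES   (100UL * 1000 * 1000)

int main(void)
{
    unsigned char *buf = malloc(BUF_BYTES);
    if (buf == NULL) {
        perror("malloc");
        return 1;
    }

    srand(1);
    for (unsigned long i = 0; i < TOUCHES; i++) {
        /* Pick a page essentially at random and dirty one byte in it. With
         * such poor locality the working set is all N_PAGES pages; once RAM
         * is exhausted, servicing each fault means evicting a page that will
         * soon be needed again, and the system thrashes. A sequential pass
         * over the same buffer would keep the working set to a few pages. */
        unsigned long page = ((unsigned long)rand() * 2654435761UL) % N_PAGES;
        buf[page * PAGE]++;
    }

    free(buf);
    return 0;
}
```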

The term is also used for various similar phenomena, particularly movement between other levels of the memory hierarchy, wherein a process progresses slowly because significant time is being spent acquiring resources.

"Thrashing" is also used in contexts other than virtual memory systems; for example, to describe cache issues in computing or silly window syndrome in networking.

Overview

Virtual memory works by treating a portion of secondary storage such as a computer hard disk as an additional layer of the cache hierarchy. Virtual memory allows processes to use more memory than is physically present in main memory. Operating systems supporting virtual memory assign processes a virtual address space and each process refers to addresses in its execution context by a so-called virtual address. To access data such as code or variables at that address, the process must translate the address to a physical address in a process known as virtual address translation. In effect, physical main memory becomes a cache for virtual memory, which is in general stored on disk in memory pages.
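
A minimal sketch of the address translation step described above, assuming a toy single-level page table with 4 KiB pages; real MMUs use multi-level tables and hardware TLBs, and the structure and field names here are purely illustrative:

```c
#include <stdbool.h>
#include <stdint.h>

#define PAGE_SHIFT  12                     /* 4 KiB pages                  */
#define PAGE_SIZE   (1u << PAGE_SHIFT)
#define OFFSET_MASK (PAGE_SIZE - 1)

/* Illustrative page-table entry: physical frame number plus a present bit. */
struct pte {
    uint64_t frame;
    bool     present;
};

/* Translate a virtual address using a flat table of PTEs. A false return
 * corresponds to a page fault: the OS must fetch the page from disk, and
 * possibly evict another page, before the access can be retried. */
bool translate(const struct pte *table, uint64_t vaddr, uint64_t *paddr)
{
    uint64_t vpn    = vaddr >> PAGE_SHIFT;   /* virtual page number */
    uint64_t offset = vaddr &  OFFSET_MASK;  /* offset within page  */

    if (!table[vpn].present)
        return false;                        /* page fault          */

    *paddr = (table[vpn].frame << PAGE_SHIFT) | offset;
    return true;
}
```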

Programs are allocated a certain number of pages as needed by the operating system. Active memory pages exist in both RAM and on disk. Inactive pages are removed from the cache and written to disk when the main memory becomes full.

If processes are utilizing all main memory and need additional memory pages, a cascade of severe cache misses known as page faults will occur, often leading to a noticeable lag in operating system responsiveness. This process, together with the futile, repetitive page swapping that occurs, is known as "thrashing". This frequently leads to high, runaway CPU utilization that can grind the system to a halt. In modern computers, thrashing may occur in the paging system (if there is not sufficient physical memory or the disk access time is overly long), or in the I/O communications subsystem (especially in conflicts over internal bus access), etc.

Depending on the configuration and algorithms involved, the throughput and latency of a system may degrade by multiple orders of magnitude. Thrashing occurs when the CPU spends less time on productive work and more time on swapping. The overall memory access time may increase, since the higher level of the memory hierarchy is then only as fast as the next lower level.[2] The CPU becomes so busy swapping pages that it cannot respond to user programs and interrupts as quickly as required. When the set of pages that active programs reference exceeds the capacity of real memory (RAM), the operating system continually evicts pages to backing store and brings the required pages back into RAM; if the CPU spends most of its time on this activity, thrashing occurs.
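
The slowdown can be made concrete with an effective-access-time calculation (the figures below are illustrative and are not taken from the cited sources): with page-fault rate p, a main-memory access time of 100 ns, and a page-fault service time of 8 ms,

```latex
\[
  t_{\text{eff}} = (1 - p)\, t_{\text{mem}} + p\, t_{\text{fault}},
  \qquad t_{\text{mem}} = 100\,\text{ns},\; t_{\text{fault}} = 8\,\text{ms},
\]
\[
  t_{\text{eff}} \approx 100\,\text{ns} + p \cdot 8\,000\,000\,\text{ns}.
\]
```

Even a fault rate of p = 0.001 (one access in a thousand) raises the effective access time from about 100 ns to roughly 8 µs, a slowdown of about eighty times, which is why a modest rise in the fault rate can produce an abrupt collapse in performance.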

Causes

In virtual memory systems, thrashing may be caused by programs or workloads that present insufficient locality of reference: if the working set of a program or a workload cannot be effectively held within physical memory, then constant data swapping, i.e., thrashing, may occur. The term was first used during the tape operating system days to describe the sound the tapes made when data was being rapidly written to and read. A worst case might occur on VAX processors. A single MOVL crossing a page boundary could have both its source and destination operands use displacement deferred addressing mode, with the longword containing each operand address crossing a page boundary and each operand itself also crossing a page boundary. Such an instruction references ten pages: two for the instruction, two for each of the two operand-address longwords, and two for each of the two operands. If not all are in RAM, each will cause a page fault, and all ten pages must be simultaneously present in memory. If any one of the ten pages cannot be swapped in (for example, to make room for any of the other pages), the instruction will fault, and every attempt to restart it will fail until all ten pages can be swapped in.

A system thrashing is often a result of a sudden spike in page demand from a small number of running programs. Swap-token[3] is a lightweight and dynamic thrashing protection mechanism. The basic idea is to set a token in the system, which is randomly given to a process that has page faults when thrashing happens. The process holding the token is given the privilege to allocate more physical memory pages to build its working set, so that it can quickly finish its execution and release the memory pages to other processes. A timestamp is used to hand over the token from one process to the next. The first version of swap-token was implemented in Linux. The second version, called preempt swap-token, maintains a priority counter for each process that tracks the number of swapped-out pages. The token is always given to the process with the highest priority, i.e., the largest number of swapped-out pages. The length of time for which a process holds the token is not constant but is determined by its priority: the more swapped-out pages a process has, the longer it holds the token.
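
A heavily simplified sketch, in C, of the preempt swap-token selection policy described above; the data structures, field names, and scaling constant are assumptions made for illustration and do not reflect the actual Linux implementation:

```c
#include <stddef.h>

/* Illustrative per-process bookkeeping for preempt swap-token. */
struct proc {
    int      pid;
    unsigned swapped_out_pages;   /* the per-process priority counter */
};

struct token_grant {
    struct proc *holder;          /* process that receives the token  */
    unsigned     hold_ticks;      /* how long it may keep the token   */
};

/* Grant the token to the faulting process with the most swapped-out pages,
 * and let it hold the token for a time proportional to that count, so the
 * most penalized process can rebuild its working set and finish quickly. */
struct token_grant grant_token(struct proc *faulting, size_t n)
{
    struct token_grant g = { NULL, 0 };

    for (size_t i = 0; i < n; i++)
        if (g.holder == NULL ||
            faulting[i].swapped_out_pages > g.holder->swapped_out_pages)
            g.holder = &faulting[i];

    if (g.holder != NULL)
        g.hold_ticks = 10 + g.holder->swapped_out_pages;  /* assumed scaling */

    return g;
}
```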

Other uses

Thrashing is best known in the context of memory and storage, but analogous phenomena occur for other resources, including:

Cache thrashing
Where main memory is accessed in a pattern that leads to multiple main memory locations competing for the same cache lines, resulting in excessive cache misses. This is most likely to be problematic for caches with low associativity (a sketch of such an access pattern follows this list).
TLB thrashing
Where the translation lookaside buffer (TLB) acting as a cache for the memory management unit (MMU), which translates virtual addresses to physical addresses, is too small for the working set of pages. TLB thrashing can occur even when instruction cache or data cache thrashing is not occurring, because these are cached at different granularities. Instructions and data are cached in small blocks (cache lines), not entire pages, but address lookup is done at the page level. Thus, even if the code and data working sets fit into the cache, a working set fragmented across many pages may not fit into the TLB, causing TLB thrashing.
Heap thrashing
Frequent garbage collection caused by failure to allocate memory for an object, whether because of insufficient free memory or because memory fragmentation leaves insufficient contiguous free memory, is referred to as heap thrashing.[4]
Process thrashing
A similar phenomenon occurs for processes: when the process working set cannot be coscheduled, i.e., not all of the interacting processes can be scheduled to run at the same time, they experience "process thrashing" due to being repeatedly scheduled and descheduled, progressing only slowly.[5]
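
As a sketch of the cache-thrashing access pattern mentioned above, the following C fragment walks a row-major matrix column by column; the matrix dimensions are illustrative values chosen so that consecutive accesses are a large power-of-two stride apart and therefore map to a small number of cache sets, repeatedly evicting one another:

```c
#include <stddef.h>

#define ROWS 4096
#define COLS 4096   /* one row is 4096 * sizeof(int) = 16 KiB, a power of two */

/* Column-wise traversal of a row-major matrix: consecutive accesses are
 * 16 KiB apart, so on a typical set-associative cache they compete for the
 * same few cache sets and keep evicting each other (conflict misses), even
 * though the data touched in one column pass is small. A row-wise traversal
 * of the same matrix touches consecutive addresses and is cache-friendly. */
long column_sum(const int *m)
{
    long sum = 0;
    for (size_t col = 0; col < COLS; col++)
        for (size_t row = 0; row < ROWS; row++)
            sum += m[row * COLS + col];
    return sum;
}
```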

See also

  • Page replacement algorithm – Algorithm for virtual memory implementation
  • Congestion collapse – Reduced quality of service due to high network traffic
  • Resource contention – In computing, a conflict over access to a shared resource
  • Out of memory – State of computer operation where no additional memory can be allocated
  • Software aging – Tendency of software to fail due to external changes or prolonged operation

References

  1. ^ Denning, Peter J. (1968). "Thrashing: Its causes and prevention" (PDF). Proceedings AFIPS, Fall Joint Computer Conference. 33: 915–922. Retrieved 2012-02-15.
  2. ^ Hennessy, John L.; Patterson, David A.; Asanović, Krste (2012). Computer Architecture: A Quantitative Approach (5th ed.). Waltham, MA: Morgan Kaufmann. ISBN 9780123838728. OCLC 755102367.
  3. ^ Jiang, Song; Zhang, Xiaodong (2005). "Token-ordered LRU: an effective page replacement policy and its implementation in Linux systems". Performance Evaluation: 5–29. doi:10.1016/j.peva.2004.10.002.
  4. ^ Performance Optimization and Tuning Techniques for IBM Processors, including IBM POWER8, "heap+thrashing" p. 170
  5. ^ Ousterhout, J. K. (1982). "Scheduling Techniques for Concurrent Systems" (PDF). Proceedings of Third International Conference on Distributed Computing Systems. pp. 22–30.