User:Pelagic/sandbox/notes/Virtual memory

Russinovich article

https://blogs.technet.microsoft.com/markrussinovich/2008/11/17/pushing-the-limits-of-windows-virtual-memory/

Mark's "Pushing the Limits of Windows" series is legendary, but the comment thread on this post also contains some interesting material.

Pavel Lebedinsky:

"...physical memory is allocated on demand. Remember that when a process calls VirtualAlloc(MEM_COMMIT) there are no physical pages allocated at this time. Physical pages are only allocated when the app accesses virtual pages for the first time. This is good because it makes committing pages a relatively cheap operation, so apps can commit memory in bigger chunks, without having to worry about each page they may or may not use.
Now, even though committing memory does not allocate physical pages, it still guarantees to the application that reading from/writing to the committed pages will never fail (or deadlock). It might be slow if other physical pages have to be moved to disk in order to make room, but it will eventually succeed.
In order to make that guarantee the memory manager has to assume that every committed page in the system might eventually be written to. And that in turn means that there has to be enough space in the physical memory and all the pagefiles combined to hold all the resulting data. In other words, the total commit charge has to be less than the system commit limit. Once the limit is reached, the memory manager will refuse to commit any more memory, even if there is still plenty of unused (free+zeroed) physical pages, or plenty of unused space in the pagefile.
In a sense, pagefiles are like stormwater ponds. Most of the time they are (almost) empty, but they have to be large enough in case a big storm happens."
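A minimal C sketch of the behaviour Lebedinsky describes: MEM_COMMIT charges the region against the commit limit but assigns no physical pages, and the working set only grows once the pages are actually touched. The 256 MB size and the GetProcessMemoryInfo readout are illustrative choices, not anything from the article.

```c
#include <windows.h>
#include <psapi.h>
#include <stdio.h>

/* Link with psapi.lib. */

int main(void)
{
    const SIZE_T size = 256u * 1024 * 1024;   /* 256 MB, arbitrary */
    SYSTEM_INFO si;
    PROCESS_MEMORY_COUNTERS pmc;

    GetSystemInfo(&si);

    /* MEM_COMMIT charges the region against the system commit limit,
       but no physical pages are assigned yet. */
    unsigned char *p = VirtualAlloc(NULL, size, MEM_RESERVE | MEM_COMMIT,
                                    PAGE_READWRITE);
    if (p == NULL) {
        /* Fails only when the charge would push the total commit charge
           past the commit limit. */
        fprintf(stderr, "VirtualAlloc failed: %lu\n", GetLastError());
        return 1;
    }

    GetProcessMemoryInfo(GetCurrentProcess(), &pmc, (DWORD)sizeof pmc);
    printf("after commit:   working set %zu KB\n", pmc.WorkingSetSize / 1024);

    /* Touching each page triggers a demand-zero fault, which is when
       physical memory is actually assigned. */
    for (SIZE_T i = 0; i < size; i += si.dwPageSize)
        p[i] = 1;

    GetProcessMemoryInfo(GetCurrentProcess(), &pmc, (DWORD)sizeof pmc);
    printf("after touching: working set %zu KB\n", pmc.WorkingSetSize / 1024);

    VirtualFree(p, 0, MEM_RELEASE);
    return 0;
}
```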

Alessandro Angelio:

"But when you sequentially scan a file that’s several GB in size, which takes a while, the system will end up swapping out practically every other process, including most of explorer.exe, and fill up the physical memory with useless disk cache (the same sector is never access twice). After that, using your system requires great patience. Even simply watching a DVD or looking at your holiday pictures makes it hard to multitask, and those are not niche scenarios."

Jamie Hanrahan:

"However this [setting pagefile+RAM to max. observed commit limit] does not leave any room in RAM for code! Or any other mapped files. Or for the operating system’s varous nonpageable allocations. Remember, "commit charge" does not include these.
My recommendation is to set up your maximum workload, then use the very convenient performance monitor counter, Page file / %usage peak.
Your pagefile should be large enough to keep this under 25%, 50% at worst."
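One way to read the counter Hanrahan points at programmatically is through PDH. The sketch below assumes the English counter path is \Paging File(_Total)\% Usage Peak, which is how the counter appears in Performance Monitor, and takes a single sample:

```c
#include <windows.h>
#include <pdh.h>
#include <stdio.h>

/* Link with pdh.lib. */

int main(void)
{
    PDH_HQUERY query;
    PDH_HCOUNTER counter;
    PDH_FMT_COUNTERVALUE value;

    if (PdhOpenQueryA(NULL, 0, &query) != ERROR_SUCCESS)
        return 1;

    /* Assumed English counter path; in Performance Monitor this appears
       under "Paging File" -> "% Usage Peak". */
    if (PdhAddEnglishCounterA(query, "\\Paging File(_Total)\\% Usage Peak",
                              0, &counter) != ERROR_SUCCESS)
        return 1;

    if (PdhCollectQueryData(query) != ERROR_SUCCESS)
        return 1;

    if (PdhGetFormattedCounterValue(counter, PDH_FMT_DOUBLE, NULL, &value)
            == ERROR_SUCCESS)
        /* Hanrahan's rule of thumb: keep this under 25% (50% at worst). */
        printf("pagefile %% usage peak: %.1f%%\n", value.doubleValue);

    PdhCloseQuery(query);
    return 0;
}
```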

Jamie Hanrahan (later):

"The page I/O rates, also visible in Performance Monitor, will tell you how much paging is happening, but it’s very difficult to tell how much paging is due to low memory conditions and how much is due to the fact that, in a virtual memory OS, paging happens. All code and pre-initialized data is brought in via paging, for example. So are the contents of all data files that are opened without bypassing the file cache."
"Your paging I/O rates do not reflect only the pagefile, because all mapped files (exe’s, dll’s, data files accessed via the file cache as well as through direct file mapping) are read and, if appropriate, written by the pager. If you want to know the page I/O rates to just the pagefile, the only way I know of is to put the pagefile by itself on a partition and then use the partition (logical disk) I/O counters."

"Whatnow" comments on conflicting recommendations re. placing pagefile on SSD.

MS KB 2860880

How to determine the appropriate page file size for 64-bit versions of Windows.

"... when a lot of physical memory is installed, a page file might not be required to back the system commit charge during peak usage. The available physical memory alone might be large enough to do this. However, a page file or a dedicated dump file might still be required to back a system crash dump."

Lippert

“Out Of Memory” Does Not Refer to Physical Memory

Takes the view that RAM is just a big disk cache, implying that (almost) all memory is backed by disk storage. But then why is the commit limit taken to be RAM + pagefile, rather than just the pagefile? The Russinovich article or its discussion mentions allocating memory that is not pagefile-backed.
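On the commit-limit question: GlobalMemoryStatusEx exposes both numbers, and ullTotalPageFile, despite its name, is documented as the current system commit limit, so subtracting ullTotalPhys gives roughly the combined pagefile size. A small sketch just to make the arithmetic concrete:

```c
#include <windows.h>
#include <stdio.h>

int main(void)
{
    MEMORYSTATUSEX ms;
    ms.dwLength = sizeof ms;    /* must be set before the call */
    if (!GlobalMemoryStatusEx(&ms))
        return 1;

    /* ullTotalPageFile is the current commit limit, i.e. roughly
       physical RAM plus the combined size of all pagefiles. */
    double gb = 1024.0 * 1024.0 * 1024.0;
    printf("physical RAM:        %.1f GB\n", ms.ullTotalPhys / gb);
    printf("commit limit:        %.1f GB\n", ms.ullTotalPageFile / gb);
    printf("approx. pagefile(s): %.1f GB\n",
           ((double)ms.ullTotalPageFile - (double)ms.ullTotalPhys) / gb);
    return 0;
}
```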