Diffstat (limited to 'Documentation/vm/page_migration')
-rw-r--r--  Documentation/vm/page_migration  175
1 files changed, 175 insertions, 0 deletions
diff --git a/Documentation/vm/page_migration b/Documentation/vm/page_migration
new file mode 100644
index 000000000000..0dd4ef30c361
--- /dev/null
+++ b/Documentation/vm/page_migration
@@ -0,0 +1,175 @@
+Page migration
+--------------
+
+Page migration allows moving the physical location of pages between
+nodes in a NUMA system while the process is running. This means that the
+virtual addresses that the process sees do not change. However, the
+system rearranges the physical location of those pages.
+
+The main intent of page migration is to reduce the latency of memory access
+by moving pages closer to the processor where the process accessing that
+memory is running.
+
+Page migration allows a process to manually relocate the node on which its
+pages are located through the MPOL_MF_MOVE and MPOL_MF_MOVE_ALL options while
+setting a new memory policy via mbind(). The pages of a process can also be
+relocated from another process using the sys_migrate_pages() function call.
+The migrate_pages() function call takes two sets of nodes and moves the pages
+of a process that are located on the from nodes to the destination nodes.
+Page migration functions are provided by the numactl package by Andi Kleen
+(a version later than 0.9.3 is required. Get it from
+ftp://ftp.suse.com/pub/people/ak). numactl provides libnuma, which
+offers an interface similar to other NUMA functionality for page migration.
+cat /proc/<pid>/numa_maps allows an easy review of where the pages of
+a process are located. See also the numa_maps manpage in the numactl package.
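+
+For illustration, a minimal userspace sketch that asks the kernel to move a
+range of pages via mbind() with MPOL_MF_MOVE could look like the following
+(the mapping size and the target node number are made-up values; build with
+-lnuma, which supplies the mbind() wrapper in <numaif.h>):
+
+	#include <numaif.h>	/* mbind(), MPOL_BIND, MPOL_MF_MOVE */
+	#include <sys/mman.h>
+	#include <string.h>
+	#include <stdio.h>
+
+	int main(void)
+	{
+		size_t len = 1024 * 1024;
+		unsigned long nodemask = 1UL << 1;	/* target node 1 */
+		void *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
+				 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
+
+		if (buf == MAP_FAILED) {
+			perror("mmap");
+			return 1;
+		}
+		memset(buf, 0, len);	/* fault the pages in somewhere */
+
+		/* Bind the range to node 1 and also migrate pages that
+		   are already resident on other nodes. */
+		if (mbind(buf, len, MPOL_BIND, &nodemask,
+			  8 * sizeof(nodemask), MPOL_MF_MOVE))
+			perror("mbind");
+
+		munmap(buf, len);
+		return 0;
+	}
+
+A subsequent look at /proc/<pid>/numa_maps should then show the range placed
+on the chosen node.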
+
+Manual migration is useful if, for example, the scheduler has relocated
+a process to a processor on a distant node. A batch scheduler or an
+administrator may detect the situation and move the pages of the process
+nearer to the new processor. At some point in the future we may have
+some mechanism in the scheduler that will automatically move the pages.
+
+Larger installations usually partition the system using cpusets into
+sections of nodes. Paul Jackson has equipped cpusets with the ability to
+move pages when a task is moved to another cpuset (see ../cpusets.txt).
+Cpusets allow the automation of process locality. If a task is moved to
+a new cpuset then all its pages are moved with it so that the
+performance of the process does not drop dramatically. The pages
+of processes in a cpuset are also moved if the allowed memory nodes of a
+cpuset are changed.
+
+All migration techniques preserve the relative location of pages within a
+group of nodes, so the particular memory allocation pattern generated by a
+process is retained even after the process has been migrated. This is
+necessary in order to preserve the memory latencies: processes will run with
+similar performance after migration.
+
+Page migration occurs in several steps. First, a high level
+description is given for those trying to use migrate_pages() from the kernel
+(for userspace usage see the numactl package by Andi Kleen mentioned above),
+followed by a low level description of how the details work.
+
+A. In kernel use of migrate_pages()
+-----------------------------------
+
+1. Remove pages from the LRU.
+
+   Lists of pages to be migrated are generated by scanning over
+   pages and moving them into lists. This is done by
+   calling isolate_lru_page().
+   Calling isolate_lru_page() increases the reference count of the page
+   so that it cannot vanish while the page migration occurs.
+   It also prevents the swapper or other scans from encountering
+   the page.
+
+2. Generate a list of newly allocated pages. These pages will contain the
+   contents of the pages from the first list after page migration is
+   complete.
+
+3. The migrate_pages() function is called which attempts
+ to do the migration. It returns the moved pages in the
+ list specified as the third parameter and the failed
+ migrations in the fourth parameter. The first parameter
+ will contain the pages that could still be retried.
+
+4. The leftover pages of various types are returned
+   to the LRU using putback_lru_pages() or otherwise
+   disposed of. The pages will still have the refcount
+   increased by isolate_lru_page() if putback_lru_pages() is not
+   used! The kernel may want to handle the various cases of failures in
+   different ways.
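+
+The four steps can be sketched roughly as follows. This is an illustrative
+fragment based on the parameter ordering described in step 3 rather than
+literal kernel code; the destination node "nid" is a placeholder and the
+calling conventions of the helpers may differ between kernel versions:
+
+	LIST_HEAD(pagelist);	/* pages isolated from the LRU (step 1) */
+	LIST_HEAD(newlist);	/* newly allocated target pages (step 2) */
+	LIST_HEAD(moved);	/* successfully migrated pages */
+	LIST_HEAD(failed);	/* pages whose migration failed */
+	struct page *page, *newpage;
+
+	/* Step 1: for each page of interest, take it off the LRU.
+	   This also takes a reference so the page cannot vanish. */
+	if (isolate_lru_page(page))
+		list_add_tail(&page->lru, &pagelist);
+
+	/* Step 2: allocate one target page per isolated page, for
+	   example on the destination node nid. */
+	list_for_each_entry(page, &pagelist, lru) {
+		newpage = alloc_pages_node(nid, GFP_HIGHUSER, 0);
+		if (newpage)
+			list_add_tail(&newpage->lru, &newlist);
+	}
+
+	/* Step 3: attempt the migration. Pages that can be retried stay
+	   on pagelist, moved pages end up on "moved", failures on
+	   "failed". */
+	migrate_pages(&pagelist, &newlist, &moved, &failed);
+
+	/* Step 4: return pages we still hold to the LRU, which also
+	   drops the extra reference taken in step 1 (the failed list
+	   would be handled the same way or disposed of otherwise). */
+	putback_lru_pages(&pagelist);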
+
+B. How migrate_pages() works
+----------------------------
+
+migrate_pages() does several passes over its list of pages. A page is moved
+if all references to the page are removable at that time. The page has
+already been removed from the LRU via isolate_lru_page() and the refcount
+is increased so that the page cannot be freed while page migration occurs.
+
+Steps:
+
+1. Lock the page to be migrated.
+
+2. Ensure that writeback is complete.
+
+3. Make sure that the page has an assigned swap cache entry if
+   it is an anonymous page. The swap cache reference is necessary
+   to preserve the information contained in the page table maps while
+   page migration occurs.
+
+4. Prep the new page that we want to move to. It is locked
+ and set to not being uptodate so that all accesses to the new
+ page immediately lock while the move is in progress.
+
+5. All the page table references to the page are either dropped (file
+ backed pages) or converted to swap references (anonymous pages).
+ This should decrease the reference count.
+
+6. The radix tree lock is taken. This will cause all processes trying
+ to reestablish a pte to block on the radix tree spinlock.
+
+7. The refcount of the page is examined and we back out if references remain;
+   otherwise, we know that we are the only one referencing this page.
+
+8. The radix tree is checked and if it does not contain the pointer to this
+ page then we back out because someone else modified the mapping first.
+
+9. The mapping is checked. If the mapping is gone then a truncate action may
+ be in progress and we back out.
+
+10. The new page is prepped with some settings from the old page so that
+ accesses to the new page will be discovered to have the correct settings.
+
+11. The radix tree is changed to point to the new page.
+
+12. The reference count of the old page is dropped because the radix tree
+ reference is gone.
+
+13. The radix tree lock is dropped. With that lookups become possible again
+ and other processes will move from spinning on the tree lock to sleeping on
+ the locked new page.
+
+14. The page contents are copied to the new page.
+
+15. The remaining page flags are copied to the new page.
+
+16. The old page flags are cleared to indicate that the page does
+    not provide any information anymore.
+
+17. Queued up writeback on the new page is triggered.
+
+18. If swap ptes were generated for the page then they are replaced with real
+    ptes. This re-enables access for processes not blocked by the page lock.
+
+19. The page locks are dropped from the old and new page.
+ Processes waiting on the page lock can continue.
+
+20. The new page is moved to the LRU and can be scanned by the swapper
+ etc again.
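+
+The heart of steps 6 through 13 can be pictured roughly like this. This is
+a condensed pseudo-C sketch of the logic described above, not the literal
+kernel code; the expected_refs value and the exact lock and helper names
+are illustrative:
+
+	struct address_space *mapping = page_mapping(page);
+	void **pslot;
+
+	spin_lock_irq(&mapping->tree_lock);		/* step 6 */
+
+	if (page_count(page) != expected_refs)		/* step 7 */
+		goto out_unlock;	/* references remain, back out */
+
+	pslot = radix_tree_lookup_slot(&mapping->page_tree,
+				       page_index(page));
+	if (!pslot || *pslot != page)			/* step 8 */
+		goto out_unlock;	/* someone modified the mapping first */
+
+	if (!page_mapping(page))			/* step 9 */
+		goto out_unlock;	/* truncate may be in progress */
+
+	/* Steps 10-12: carry settings over to the new page, point the
+	   radix tree at it and drop the reference the tree held on the
+	   old page. */
+	get_page(newpage);
+	*pslot = newpage;				/* step 11 */
+	put_page(page);					/* step 12 */
+
+out_unlock:
+	spin_unlock_irq(&mapping->tree_lock);		/* step 13 */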
+
+TODO list
+---------
+
+- Page migration requires the use of swap handles to preserve the
+ information of the anonymous page table entries. This means that swap
+ space is reserved but never used. The maximum number of swap handles used
+ is determined by CHUNK_SIZE (see mm/mempolicy.c) per ongoing migration.
+ Reservation of pages could be avoided by having a special type of swap
+ handle that does not require swap space and that would only track the page
+ references. Something like that was proposed by Marcelo Tosatti in the
+ past (search for migration cache on lkml or linux-mm@kvack.org).
+
+- Page migration unmaps ptes for file backed pages and requires page
+  faults to reestablish these ptes. This could be optimized by somehow
+  recording the references before migration and then reestablishing them
+  later. However, there are several locking challenges that have to be
+  overcome before this is possible.
+
+- Page migration generates read ptes for anonymous pages. Dirty page
+ faults are required to make the pages writable again. It may be possible
+ to generate a pte marked dirty if it is known that the page is dirty and
+ that this process has the only reference to that page.
+
+Christoph Lameter, March 8, 2006.
+