
Searched refs:vmemmap (Results 1 – 25 of 35) sorted by relevance


/linux-5.19.10/Documentation/translations/zh_CN/vm/
memory-model.rst 89 "sparse vmemmap". The selection is made at build time and is determined by the value of `CONFIG_SPARSEMEM_VMEMMAP`
96 operations. There is a global `struct page *vmemmap` pointer, pointing to a virtually contiguous array of `struct page`
97 objects. The PFN is an index into that array, and the offset of a `struct page` from `vmemmap` is that page's PFN.
99 To use vmemmap, an architecture has to reserve a range of virtual addresses that will map the physical pages containing the memory map, and
100 make sure that `vmemmap` points to that range. In addition, the architecture should implement the :c:func:`vmemmap_populate` method,
101 which will allocate the physical memory and create the page tables for the virtual memory map. If an architecture has no special requirements for the vmemmap mappings,
/linux-5.19.10/Documentation/vm/
vmemmap_dedup.rst 4 A vmemmap diet for HugeTLB and Device DAX
158 vmemmap pages and restore the previous mapping relationship.
161 We also can use this approach to free (PAGE_SIZE - 1) vmemmap pages.
172 Notice: The head vmemmap page is not freed to the buddy allocator and all
173 tail vmemmap pages are mapped to the head vmemmap page frame. So we can see
182 in the previous chapter, except when used with the vmemmap in
193 There's no remapping of vmemmap given that device-dax memory is not part of
196 the head vmemmap page representing, whereas device-dax reuses the tail
197 vmemmap page. This results in only half of the savings compared to HugeTLB.
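
The HugeTLB savings described in the vmemmap_dedup.rst lines above follow from simple size arithmetic. The sketch below is only a worked example of that arithmetic, assuming 4 KiB base pages and a 64-byte struct page (typical x86-64 values, not stated in the snippet).

    #include <stdio.h>

    /* Worked example only: assumes 4 KiB base pages and a 64-byte
     * struct page; both are illustrative, typical x86-64 values. */
    int main(void)
    {
        const unsigned long base_page   = 4096;        /* bytes per base page */
        const unsigned long struct_page = 64;          /* bytes per struct page */
        const unsigned long hugepage    = 2UL << 20;   /* one 2 MiB HugeTLB page */

        unsigned long nr_struct_pages = hugepage / base_page;          /* 512 */
        unsigned long map_bytes       = nr_struct_pages * struct_page; /* 32 KiB */
        unsigned long vmemmap_pages   = map_bytes / base_page;         /* 8 */

        /* The head vmemmap page is kept and the tails are remapped to it,
         * so all but one of the vmemmap pages can be returned to the buddy
         * allocator: 7 of 8 pages, i.e. 28 KiB saved per 2 MiB HugeTLB page. */
        printf("vmemmap pages: %lu, freeable: %lu\n",
               vmemmap_pages, vmemmap_pages - 1);
        return 0;
    }
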
memory-model.rst 107 vmemmap". The selection is made at build time and it is determined by
114 The sparse vmemmap uses a virtually mapped memory map to optimize
116 page *vmemmap` pointer that points to a virtually contiguous array of
118 offset of the `struct page` from `vmemmap` is the PFN of that
121 To use vmemmap, an architecture has to reserve a range of virtual
123 map and make sure that `vmemmap` points to that range. In addition,
127 requirements for the vmemmap mappings, it can use default
/linux-5.19.10/include/asm-generic/
memory_model.h 25 #define __pfn_to_page(pfn) (vmemmap + (pfn))
26 #define __page_to_pfn(page) (unsigned long)((page) - vmemmap)
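
Read together with the memory-model.rst lines above, these two macros are just pointer arithmetic on a flat array of struct page. The following user-space toy model (the struct layout, array size, and PFN value are all made up for illustration) shows the round trip the macros describe.

    #include <assert.h>

    /* Toy model of the virtually contiguous memory map: a small array of
     * struct page indexed by PFN. The struct layout and sizes here are
     * illustrative only, not the kernel's. */
    struct page { unsigned long flags; void *private; };

    static struct page map[1024];
    static struct page *vmemmap = map;   /* stands in for the global pointer */

    #define __pfn_to_page(pfn)  (vmemmap + (pfn))
    #define __page_to_pfn(page) ((unsigned long)((page) - vmemmap))

    int main(void)
    {
        unsigned long pfn = 42;                  /* arbitrary frame number */
        struct page *page = __pfn_to_page(pfn);

        /* The offset of the struct page from vmemmap is the page's PFN. */
        assert(__page_to_pfn(page) == pfn);
        return 0;
    }
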
/linux-5.19.10/arch/powerpc/mm/
pgtable_64.c 91 struct page *vmemmap; variable
92 EXPORT_SYMBOL(vmemmap);
init_64.c 76 unsigned long offset = vmemmap_addr - ((unsigned long)(vmemmap)); in vmemmap_subsection_start()
/linux-5.19.10/tools/testing/selftests/vm/
.gitignore 5 hugepage-vmemmap
run_vmtests.sh 105 run_test ./hugepage-vmemmap
Makefile 37 TEST_GEN_FILES += hugepage-vmemmap
/linux-5.19.10/Documentation/riscv/
vm-layout.rst 52 ffffffc700000000 | -228 GB | ffffffc7ffffffff | 4 GB | vmemmap
88 ffff8d8000000000 | -114.5 TB | ffff8f7fffffffff | 2 TB | vmemmap
/linux-5.19.10/Documentation/admin-guide/kdump/
vmcoreinfo.rst 503 VMEMMAP_START ~ VMEMMAP_END-1 : vmemmap region, used for struct page array.
550 The vmemmap_list maintains the entire vmemmap physical mapping. Used
551 to get vmemmap list count and populated vmemmap regions info. If the
552 vmemmap address translation information is stored in the crash kernel,
553 it is used to translate vmemmap kernel virtual addresses.
570 The vmemmap virtual address space management does not have a traditional
576 when computing the count of vmemmap regions.
/linux-5.19.10/arch/s390/mm/
dump_pagetables.c 281 address_markers[VMEMMAP_NR].start_address = (unsigned long) vmemmap; in pt_dump_init()
282 address_markers[VMEMMAP_END_NR].start_address = (unsigned long)vmemmap + vmemmap_size; in pt_dump_init()
/linux-5.19.10/arch/s390/boot/
startup.c 21 struct page *__bootdata_preserved(vmemmap);
201 vmemmap = (struct page *)vmemmap_start; in setup_kernel_memory_layout()
/linux-5.19.10/Documentation/translations/zh_CN/riscv/
vm-layout.rst 56 ffffffc700000000 | -228 GB | ffffffc7ffffffff | 4 GB | vmemmap
/linux-5.19.10/Documentation/arm64/
memory.rst 43 fffffc0000000000 fffffdffffffffff 2TB vmemmap
61 fffffc0000000000 ffffffdfffffffff ~4TB vmemmap
124 offset and vmemmap offsets are computed at early boot to enable
/linux-5.19.10/arch/x86/include/asm/
pgtable_64.h 258 #define vmemmap ((struct page *)VMEMMAP_START) macro
/linux-5.19.10/fs/
Kconfig 262 bool "Default optimizing vmemmap pages of HugeTLB to on"
266 When using HUGETLB_PAGE_OPTIMIZE_VMEMMAP, the optimization of unused vmemmap
268 to enable optimizing vmemmap pages of HugeTLB by default. It can then
/linux-5.19.10/mm/
Makefile 83 obj-$(CONFIG_SPARSEMEM_VMEMMAP) += sparse-vmemmap.o
/linux-5.19.10/arch/powerpc/include/asm/nohash/64/
pgtable.h 78 #define vmemmap ((struct page *)VMEMMAP_BASE) macro
/linux-5.19.10/arch/s390/kernel/
setup.c 166 struct page *vmemmap; variable
167 EXPORT_SYMBOL(vmemmap);
/linux-5.19.10/Documentation/translations/zh_CN/dev-tools/
kasan.rst 286 small regions). For all other areas, such as the vmalloc and vmemmap spaces, a read-only page is mapped
/linux-5.19.10/Documentation/admin-guide/sysctl/
vm.rst 573 Enable (set to 1) or disable (set to 0) the feature of optimizing vmemmap pages
576 Once enabled, the vmemmap pages of subsequently allocated HugeTLB pages from
580 to the buddy allocator, the vmemmap pages representing that range need to be
581 remapped again and the vmemmap pages discarded earlier need to be reallocated
590 pool to the buddy allocator since the allocation of vmemmap pages could be
593 Once disabled, the vmemmap pages of subsequently allocated HugeTLB pages from
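
For context, the vm.rst lines above describe a runtime toggle for the HugeTLB vmemmap optimization. The sketch below assumes the knob is exposed as /proc/sys/vm/hugetlb_optimize_vmemmap; the name is truncated out of the snippet, so treat it as an assumption, and the program simply writes a value to that file.

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    /* Sketch only: assumes the sysctl documented above is exposed as
     * /proc/sys/vm/hugetlb_optimize_vmemmap; the knob name is not visible
     * in the truncated snippet. Writing "1" enables the optimization for
     * HugeTLB pages allocated afterwards, "0" disables it. */
    int main(void)
    {
        int fd = open("/proc/sys/vm/hugetlb_optimize_vmemmap", O_WRONLY);
        if (fd < 0) {
            perror("open");
            return 1;
        }
        if (write(fd, "1", 1) != 1)
            perror("write");
        close(fd);
        return 0;
    }
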
/linux-5.19.10/arch/ia64/include/asm/
pgtable.h 228 # define vmemmap ((struct page *)VMALLOC_END) macro
/linux-5.19.10/arch/powerpc/kernel/
setup-common.c 833 pr_info("vmemmap start = 0x%lx\n", (unsigned long)vmemmap); in print_system_info()
/linux-5.19.10/Documentation/admin-guide/mm/
hugetlbpage.rst 63 Note: When the feature of freeing unused vmemmap pages associated
87 Note: When the feature of freeing unused vmemmap pages associated with each
168 unused vmemmap pages associated with each HugeTLB page.
