Searched refs:folios (Results 1 – 8 of 8) sorted by relevance
97 struct folio *folios[PAGEVEC_SIZE]; member
103 offsetof(struct folio_batch, folios));
140 fbatch->folios[fbatch->nr++] = folio; in folio_batch_add()
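The hits above outline the core `folio_batch` pattern: a fixed-size array of `struct folio` pointers plus a count, with `folio_batch_add()` appending and returning the space left. A minimal standalone sketch of that pattern (a userspace mock mirroring the kernel names, not the kernel code itself; the `struct folio` here is deliberately opaque):

```c
#include <assert.h>

#define PAGEVEC_SIZE 31         /* batch capacity, as in pagevec.h */

struct folio;                   /* opaque here; the real struct lives in mm */

/* Mock of the kernel's folio_batch: a small stack-allocated array. */
struct folio_batch {
	unsigned char nr;
	struct folio *folios[PAGEVEC_SIZE];
};

static void folio_batch_init(struct folio_batch *fbatch)
{
	fbatch->nr = 0;
}

static unsigned folio_batch_count(struct folio_batch *fbatch)
{
	return fbatch->nr;
}

static unsigned folio_batch_space(struct folio_batch *fbatch)
{
	return PAGEVEC_SIZE - fbatch->nr;
}

/* Append a folio; returns the number of slots remaining. */
static unsigned folio_batch_add(struct folio_batch *fbatch,
				struct folio *folio)
{
	fbatch->folios[fbatch->nr++] = folio;
	return folio_batch_space(fbatch);
}
```

Callers typically add folios until `folio_batch_add()` returns 0, process the whole batch, then reinitialize and continue, which amortizes locking across up to `PAGEVEC_SIZE` folios.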
71 if (xa_is_value(fbatch->folios[j])) in truncate_folio_batch_exceptionals()
84 struct folio *folio = fbatch->folios[i]; in truncate_folio_batch_exceptionals()
88 fbatch->folios[j++] = folio; in truncate_folio_batch_exceptionals()
369 truncate_cleanup_folio(fbatch.folios[i]); in truncate_inode_pages_range()
372 folio_unlock(fbatch.folios[i]); in truncate_inode_pages_range()
415 struct folio *folio = fbatch.folios[i]; in truncate_inode_pages_range()
515 struct folio *folio = fbatch.folios[i]; in invalidate_mapping_pagevec()
646 struct folio *folio = fbatch.folios[i]; in invalidate_inode_pages2_range()
280 XA_STATE(xas, &mapping->i_pages, fbatch->folios[0]->index); in page_cache_delete_batch()
300 if (folio != fbatch->folios[i]) { in page_cache_delete_batch()
302 fbatch->folios[i]->index, folio); in page_cache_delete_batch()
329 struct folio *folio = fbatch->folios[i]; in delete_from_page_cache_batch()
341 filemap_free_folio(mapping, fbatch->folios[i]); in delete_from_page_cache_batch()
2608 folio = fbatch->folios[folio_batch_count(fbatch) - 1]; in filemap_get_pages()
2716 fbatch.folios[0])) in filemap_read()
2717 folio_mark_accessed(fbatch.folios[0]); in filemap_read()
2720 struct folio *folio = fbatch.folios[i]; in filemap_read()
2752 folio_put(fbatch.folios[i]); in filemap_read()
1082 struct folio *folio = fbatch->folios[i]; in folio_batch_remove_exceptionals()
1084 fbatch->folios[j++] = folio; in folio_batch_remove_exceptionals()
938 folio = fbatch.folios[i]; in shmem_undo_range()
1000 folio = fbatch.folios[i]; in shmem_undo_range()
1211 struct folio *folio = fbatch->folios[i]; in shmem_unuse_swap_entries()
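Several of the hits above (the `j++` loops in `truncate_folio_batch_exceptionals()` and `folio_batch_remove_exceptionals()`) share one idiom: walk the batch with index `i`, keep only real folio pointers by copying them down to index `j`, and shrink the count. "Value" (exceptional) entries in the page cache are XArray tagged pointers, not folios. A self-contained sketch of that compaction pass (the tag check is mocked as a low-bit test, loosely following the real XArray encoding; this is an illustration, not the kernel implementation):

```c
#include <assert.h>

#define PAGEVEC_SIZE 31

struct folio;                   /* opaque mock */

struct folio_batch {
	unsigned char nr;
	struct folio *folios[PAGEVEC_SIZE];
};

static unsigned folio_batch_count(struct folio_batch *fbatch)
{
	return fbatch->nr;
}

/* Mock of xa_is_value(): XArray value entries carry a low-bit tag. */
static int xa_is_value(const void *entry)
{
	return ((unsigned long)entry & 1) != 0;
}

/*
 * In-place compaction: keep real folio pointers, drop value entries.
 * Mirrors the i/j loop seen in the truncate.c and swap.c hits.
 */
static void remove_exceptionals(struct folio_batch *fbatch)
{
	unsigned i, j;

	for (i = 0, j = 0; i < folio_batch_count(fbatch); i++) {
		struct folio *folio = fbatch->folios[i];

		if (!xa_is_value(folio))
			fbatch->folios[j++] = folio;
	}
	fbatch->nr = j;
}
```

Compacting in place avoids a second batch and preserves the relative order of the surviving folios, which the later per-folio loops (lock, truncate, unlock, put) rely on.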
104 * Handle folios that span multiple pages.
109 don't match folio sizes or folio alignments and that may cross folios.
363 it transferred. The filesystem also should not deal with setting folios
367 Note that the helpers have the folios locked, but not pinned. It is
391 [Optional] This is called after the folios in the request have all been
438 * Once the data is read, the folios that have been fully read/cleared:
446 * Any folios that need writing to the cache will then have DIO writes issued.
450 * Writes to the cache will proceed asynchronously and the folios will have the
623 on dirty pages, and ->release_folio on clean folios with the private
876 release_folio is called on folios with private data to tell the
886 some or all folios in an address_space. This can happen
891 and needs to be certain that all folios are invalidated, then
939 some filesystems have more complex state (unstable folios in NFS
300 ->readahead() unlocks the folios that I/O is attempted on like ->read_folio().