/linux-6.6.21/Documentation/block/
stat.rst
    29: read I/Os       requests      number of read I/Os processed
    30: read merges     requests      number of read I/Os merged with in-queue I/O
    32: read ticks      milliseconds  total wait time for read requests
    33: write I/Os      requests      number of write I/Os processed
    34: write merges    requests      number of write I/Os merged with in-queue I/O
    36: write ticks     milliseconds  total wait time for write requests
    37: in_flight       requests      number of I/Os currently in flight
    39: time_in_queue   milliseconds  total wait time for all requests
    40: discard I/Os    requests      number of discard I/Os processed
    41: discard merges  requests      number of discard I/Os merged with in-queue I/O
    [all …]
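The fields above are columns of the single-line /sys/block/<device>/stat file. A minimal user-space sketch that reads the first eleven columns and labels a few of them, assuming the field order documented in stat.rst; the device name "sda" is purely an example:

    /* Sketch: read /sys/block/sda/stat and label a few fields.
     * Field order follows Documentation/block/stat.rst. */
    #include <stdio.h>

    int main(void)
    {
        unsigned long long v[11];
        FILE *f = fopen("/sys/block/sda/stat", "r");

        if (!f)
            return 1;
        if (fscanf(f, "%llu %llu %llu %llu %llu %llu %llu %llu %llu %llu %llu",
                   &v[0], &v[1], &v[2], &v[3], &v[4], &v[5],
                   &v[6], &v[7], &v[8], &v[9], &v[10]) == 11) {
            printf("read I/Os: %llu, read merges: %llu, read ticks: %llu ms\n",
                   v[0], v[1], v[3]);
            printf("write I/Os: %llu, in_flight: %llu, time_in_queue: %llu ms\n",
                   v[4], v[8], v[10]);
        }
        fclose(f);
        return 0;
    }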
blk-mq.rst
    9: through queueing and submitting IO requests to block devices simultaneously,
    23: involves ordering read/write requests according to the current position of the
    32: The former design had a single queue to store block IO requests with a single
    45: for instance), blk-mq takes action: it will store and manage IO requests to
    53: layer or if we want to try to merge requests. In both cases, requests will be
    56: Then, after the requests are processed by software queues, they will be placed
    58: to process those requests. However, if the hardware does not have enough
    59: resources to accept more requests, blk-mq will place requests on a temporary
    65: The block IO subsystem adds requests in the software staging queues
    73: The staging queue can be used to merge requests for adjacent sectors. For
    [all …]
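A driver participates in this model by filling in struct blk_mq_ops. The sketch below is a hypothetical ->queue_rq() callback illustrating the resource-exhaustion case described above; the mydev_* names are invented for illustration and are not from the kernel tree:

    #include <linux/blk-mq.h>

    /* Assumed hardware-specific helper: returns false when the device
     * cannot accept another request right now. */
    static bool mydev_hw_queue_request(void *hw, struct request *rq);

    /* Hypothetical ->queue_rq(): start the request, try to hand it to the
     * hardware, and return BLK_STS_RESOURCE when the device is full. */
    static blk_status_t mydev_queue_rq(struct blk_mq_hw_ctx *hctx,
                                       const struct blk_mq_queue_data *bd)
    {
        struct request *rq = bd->rq;

        blk_mq_start_request(rq);

        if (!mydev_hw_queue_request(hctx->queue->queuedata, rq))
            return BLK_STS_RESOURCE;

        return BLK_STS_OK;
    }

    static const struct blk_mq_ops mydev_mq_ops = {
        .queue_rq = mydev_queue_rq,
    };

Returning BLK_STS_RESOURCE is what makes blk-mq hold the request back and dispatch it again later, matching the "temporary queue" behaviour the document describes.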
writeback_cache_control.rst
    17: a forced cache flush, and the Force Unit Access (FUA) flag for requests.
    26: guarantees that previously completed write requests are on non-volatile
    58: on non-empty bios can simply be ignored, and REQ_PREFLUSH requests without
    68: support required, the block layer completes empty REQ_PREFLUSH requests before
    70: requests that have a payload. For devices with volatile write caches the
    76: and handle empty REQ_OP_FLUSH requests in its prep_fn/request_fn. Note that
    77: REQ_PREFLUSH requests with a payload are automatically turned into a sequence
    84: and the driver must handle write requests that have the REQ_FUA bit set
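In 6.6 a driver declares its cache behaviour with blk_queue_write_cache(), and an explicit cache flush can be requested with blkdev_issue_flush(). A short sketch under those assumptions; the mydev_* wrappers are illustrative only:

    #include <linux/blkdev.h>

    /* At queue setup: advertise a volatile write cache and native FUA
     * support so the block layer passes REQ_PREFLUSH/REQ_FUA through
     * instead of emulating them. */
    static void mydev_setup_cache(struct request_queue *q)
    {
        blk_queue_write_cache(q, true, true);
    }

    /* Issue an empty flush: completes once previously completed writes
     * are on non-volatile media. */
    static int mydev_flush_cache(struct block_device *bdev)
    {
        return blkdev_issue_flush(bdev);
    }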
/linux-6.6.21/Documentation/devicetree/bindings/dma/
lpc1850-dmamux.txt
    11: - dma-requests: Number of DMA requests for the mux
    15: - dma-requests: Number of DMA requests the controller can handle
    28: dma-requests = <16>;
    40: dma-requests = <64>;
fsl-imx-dma.txt
    18: - dma-requests : Number of DMA requests supported.
    19: - #dma-requests : deprecated
    34: Clients have to specify the DMA requests with phandles in a list.
    40: - dma-names: List of string identifiers for the DMA requests. For the correct
ti-dma-crossbar.txt
    9: - dma-requests: Number of DMA requests the crossbar can receive
    13: - dma-requests: Number of DMA requests the controller can handle
    43: dma-requests = <127>;
    51: dma-requests = <205>;
renesas,rzn1-dmamux.yaml
    34: dma-requests:
    39: - dma-requests
    50: dma-requests = <32>;
dma-router.yaml
    18: have more peripherals integrated with DMA requests than what the DMA
    33: dma-requests:
    49: dma-requests = <205>;
owl-dma.yaml
    42: dma-requests:
    59: - dma-requests
    76: dma-requests = <46>;
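All of the bindings above use the same dma-requests property to size the controller, mux or crossbar. A sketch of how a driver might read it with the standard OF helper; the fallback value of 32 and the function name are assumptions for illustration:

    #include <linux/of.h>
    #include <linux/printk.h>

    static u32 mydma_request_count(struct device_node *np)
    {
        u32 nr_requests = 32;    /* assumed default when the property is absent */

        /* "dma-requests" as documented in the bindings listed above */
        if (of_property_read_u32(np, "dma-requests", &nr_requests))
            pr_info("%pOF: no dma-requests property, assuming %u\n",
                    np, nr_requests);

        return nr_requests;
    }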
/linux-6.6.21/Documentation/virt/acrn/
io-request.rst
    14: For each User VM, there is a shared 4-KByte memory region used for I/O requests
    26: An I/O client is responsible for handling User VM I/O requests whose accessed
    29: default client, that handles all I/O requests that do not fit into the range of
    33: Below illustration shows the relationship between I/O requests shared buffer,
    34: I/O requests and I/O clients.
    84: 4. Processing flow of I/O requests
    91: c. The upcall handler schedules a worker to dispatch I/O requests.
    92: d. The worker looks for the PENDING I/O requests, assigns them to different
    95: e. The notified client handles the assigned I/O requests.
    96: f. The HSM updates I/O requests states to COMPLETE and notifies the hypervisor
/linux-6.6.21/drivers/gpu/drm/i915/gt/uc/
intel_guc_ct.c
    117: spin_lock_init(&ct->requests.lock);   in intel_guc_ct_init_early()
    118: INIT_LIST_HEAD(&ct->requests.pending);   in intel_guc_ct_init_early()
    119: INIT_LIST_HEAD(&ct->requests.incoming);   in intel_guc_ct_init_early()
    123: INIT_WORK(&ct->requests.worker, ct_incoming_request_worker_func);   in intel_guc_ct_init_early()
    382: unsigned int lost = fence % ARRAY_SIZE(ct->requests.lost_and_found);   in ct_track_lost_and_found()
    390: ct->requests.lost_and_found[lost].stack = stack_depot_save(entries, n, GFP_NOWAIT);   in ct_track_lost_and_found()
    392: ct->requests.lost_and_found[lost].fence = fence;   in ct_track_lost_and_found()
    393: ct->requests.lost_and_found[lost].action = action;   in ct_track_lost_and_found()
    400: return ++ct->requests.last_fence;   in ct_get_next_fence()
    740: spin_lock(&ct->requests.lock);   in ct_send()
    [all …]
/linux-6.6.21/drivers/gpu/drm/i915/gt/
intel_gt_requests.c
    21: list_for_each_entry_safe(rq, rn, &tl->requests, link)   in retire_requests()
    31: return !list_empty(&engine->kernel_context->timeline->requests);   in engine_active()
    208: container_of(work, typeof(*gt), requests.retire_work.work);   in retire_work_handler()
    210: queue_delayed_work(gt->i915->unordered_wq, &gt->requests.retire_work,   in retire_work_handler()
    217: INIT_DELAYED_WORK(&gt->requests.retire_work, retire_work_handler);   in intel_gt_init_requests()
    222: cancel_delayed_work(&gt->requests.retire_work);   in intel_gt_park_requests()
    227: queue_delayed_work(gt->i915->unordered_wq, &gt->requests.retire_work,   in intel_gt_unpark_requests()
    234: cancel_delayed_work_sync(&gt->requests.retire_work);   in intel_gt_fini_requests()
/linux-6.6.21/Documentation/virt/kvm/
vcpu-requests.rst
    14: /* Check if any requests are pending for VCPU @vcpu. */
    40: as possible after making the request. This means most requests
    69: ensure VCPU requests are seen by VCPUs (see "Ensuring Requests Are Seen"),
    90: certain VCPU requests, namely KVM_REQ_TLB_FLUSH, to wait until the VCPU
    96: VCPU requests are simply bit indices of the ``vcpu->requests`` bitmap.
    100: clear_bit(KVM_REQ_UNBLOCK & KVM_REQUEST_MASK, &vcpu->requests);
    104: independent requests; all additional bits are available for architecture
    105: dependent requests.
    142: VCPU requests should be masked by KVM_REQUEST_MASK before using them with
    152: This flag is applied to requests that only need immediate attention
    [all …]
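In practice the raw bitmap operations quoted above are wrapped by helpers from include/linux/kvm_host.h. A sketch of the producer/consumer pattern, using KVM_REQ_TLB_FLUSH only as an example; the example_* names are invented:

    #include <linux/kvm_host.h>

    /* Assumed arch-specific handler, declared only to complete the sketch. */
    static void example_handle_tlb_flush(struct kvm_vcpu *vcpu);

    /* Producer: set the request bit, then kick the VCPU so a running
     * guest exits and notices the request soon. */
    static void example_raise(struct kvm_vcpu *vcpu)
    {
        kvm_make_request(KVM_REQ_TLB_FLUSH, vcpu);
        kvm_vcpu_kick(vcpu);
    }

    /* Consumer (VCPU run loop): kvm_check_request() returns true and
     * clears the bit if the request was pending. */
    static void example_consume(struct kvm_vcpu *vcpu)
    {
        if (kvm_check_request(KVM_REQ_TLB_FLUSH, vcpu))
            example_handle_tlb_flush(vcpu);
    }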
/linux-6.6.21/Documentation/filesystems/
virtiofs.rst
    58: Since the virtio-fs device uses the FUSE protocol for file system requests, the
    64: FUSE requests are placed into a virtqueue and processed by the host. The
    71: prioritize certain requests over others. Virtqueues have queue semantics and
    72: it is not possible to change the order of requests that have been enqueued.
    74: impossible to add high priority requests. In order to address this difference,
    75: the virtio-fs device uses a "hiprio" virtqueue specifically for requests that
    76: have priority over normal requests.
gfs2-glocks.rst
    19: The gl_holders list contains all the queued lock requests (not
    77: grant for which we ignore remote demote requests. This is in order to
    163: 1. DLM lock time (non-blocking requests)
    164: 2. DLM lock time (blocking requests)
    169: currently means any requests when (a) the current state of
    173: lock requests.
    176: how many lock requests have been made, and thus how much data
    180: of dlm lock requests issued.
    198: the average time between lock requests for a glock means we
    225: srtt Smoothed round trip time for non blocking dlm requests
    [all …]
/linux-6.6.21/arch/powerpc/kvm/
trace.h
    106: __field( __u32, requests )
    111: __entry->requests = vcpu->requests;
    115: __entry->cpu_nr, __entry->requests)
/linux-6.6.21/Documentation/ABI/stable/
sysfs-bus-xen-backend
    39: Number of flush requests from the frontend.
    46: Number of requests delayed because the backend was too
    47: busy processing previous requests.
    54: Number of read requests from the frontend.
    68: Number of write requests from the frontend.
/linux-6.6.21/Documentation/ABI/testing/
sysfs-class-scsi_tape
    33: The number of I/O requests issued to the tape drive other
    34: than SCSI read/write requests.
    54: Shows the total number of read requests issued to the tape
    65: read I/O requests to complete.
    85: Shows the total number of write requests issued to the tape
    96: write I/O requests to complete.
/linux-6.6.21/Documentation/admin-guide/device-mapper/
log-writes.rst
    10: that is in the WRITE requests is copied into the log to make the replay happen
    17: cache. This means that normal WRITE requests are not actually logged until the
    22: This works by attaching all WRITE requests to a list once the write completes.
    39: Any REQ_FUA requests bypass this flushing mechanism and are logged as soon as
    40: they complete as those requests will obviously bypass the device cache.
    42: Any REQ_OP_DISCARD requests are treated like WRITE requests. Otherwise we would
    43: have all the DISCARD requests, and then the WRITE requests and then the FLUSH
/linux-6.6.21/Documentation/scsi/
hptiop.rst
    110: All queued requests are handled via inbound/outbound queue port.
    125: - Post the packet to IOP by writing it to inbound queue. For requests
    127: requests allocated in host memory, write (0x80000000|(bus_addr>>5))
    134: For requests allocated in IOP memory, the request offset is posted to
    137: For requests allocated in host memory, (0x80000000|(bus_addr>>5))
    144: For requests allocated in IOP memory, the host driver frees the request
    147: Non-queued requests (reset/flush etc) can be sent via inbound message
    155: All queued requests are handled via inbound/outbound list.
    169: round to 0 if the index reaches the supported count of requests.
    186: Non-queued requests (reset communication/reset/flush etc) can be sent via PCIe
/linux-6.6.21/Documentation/mm/
balance.rst
    14: allocation requests that have order-0 fallback options. In such cases,
    17: __GFP_IO allocation requests are made to prevent file system deadlocks.
    19: In the absence of non sleepable allocation requests, it seems detrimental
    24: That being said, the kernel should try to fulfill requests for direct
    26: the dma pool, so as to keep the dma pool filled for dma requests (atomic
    29: regular memory requests by allocating one from the dma pool, instead
    74: probably because all allocation requests are coming from intr context
    88: watermark[WMARK_HIGH]. When low_on_memory is set, page allocation requests will
    97: 1. Dynamic experience should influence balancing: number of failed requests
/linux-6.6.21/Documentation/driver-api/firmware/
request_firmware.rst
    12: Synchronous firmware requests
    15: Synchronous firmware requests will wait until the firmware is found or until
    43: Asynchronous firmware requests
    46: Asynchronous firmware requests allow driver code to not have to wait
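A minimal sketch of the synchronous path: request_firmware() blocks until the lookup succeeds or fails, the driver consumes fw->data/fw->size, then releases the firmware. The firmware name and mydev_load_blob() are assumptions for illustration:

    #include <linux/firmware.h>

    /* Assumed device-specific helper that programs the blob into hardware. */
    static int mydev_load_blob(const u8 *data, size_t size);

    static int mydev_load_firmware(struct device *dev)
    {
        const struct firmware *fw;
        int err;

        /* Blocks until the firmware is found or the lookup fails. */
        err = request_firmware(&fw, "example/fw.bin", dev);
        if (err)
            return err;

        err = mydev_load_blob(fw->data, fw->size);

        release_firmware(fw);
        return err;
    }

The asynchronous variant, request_firmware_nowait(), instead takes a completion callback so the driver does not block during probe.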
/linux-6.6.21/Documentation/hid/
hid-transport.rst
    105: - Control Channel (ctrl): The ctrl channel is used for synchronous requests and
    108: events or answers to host requests on this channel.
    112: SET_REPORT requests.
    120: requiring explicit requests. Devices can choose to send data continuously or
    123: to device and may include LED requests, rumble requests or more. Output
    131: Feature reports are never sent without requests. A host must explicitly set
    142: channel provides synchronous GET/SET_REPORT requests. Plain reports are only
    150: simultaneous GET_REPORT requests.
    159: GET_REPORT requests can be sent for any of the 3 report types and shall
    173: multiple synchronous SET_REPORT requests.
    [all …]
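From a HID driver's point of view, the synchronous GET/SET_REPORT traffic on the control channel goes through hid_hw_raw_request(). A sketch; the report IDs and example_* names are illustrative:

    #include <linux/hid.h>

    /* GET_REPORT of a feature report (report ID 1 assumed) over the
     * control channel. */
    static int example_get_feature(struct hid_device *hdev, u8 *buf, size_t len)
    {
        return hid_hw_raw_request(hdev, 1, buf, len,
                                  HID_FEATURE_REPORT, HID_REQ_GET_REPORT);
    }

    /* SET_REPORT of an output report, e.g. an LED or rumble request;
     * buf[0] is assumed to hold the report ID. */
    static int example_set_output(struct hid_device *hdev, u8 *buf, size_t len)
    {
        return hid_hw_raw_request(hdev, buf[0], buf, len,
                                  HID_OUTPUT_REPORT, HID_REQ_SET_REPORT);
    }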
/linux-6.6.21/drivers/media/v4l2-core/
v4l2-ctrls-request.c
    21: INIT_LIST_HEAD(&hdl->requests);   in v4l2_ctrl_handler_init_request()
    39: if (hdl->req_obj.ops || list_empty(&hdl->requests))   in v4l2_ctrl_handler_free_request()
    47: list_for_each_entry_safe(req, next_req, &hdl->requests, requests) {   in v4l2_ctrl_handler_free_request()
    102: list_del_init(&hdl->requests);   in v4l2_ctrl_request_unbind()
    163: list_add_tail(&hdl->requests, &from->requests);   in v4l2_ctrl_request_bind()
/linux-6.6.21/net/handshake/
netlink.c
    199: LIST_HEAD(requests);   in handshake_net_exit()
    208: list_splice_init(&requests, &hn->hn_requests);   in handshake_net_exit()
    211: while (!list_empty(&requests)) {   in handshake_net_exit()
    212: req = list_first_entry(&requests, struct handshake_req, hr_list);   in handshake_net_exit()