/linux-6.1.9/Documentation/hwmon/
  ibmpowernv.rst:
    18: 'hwmon' populates the 'sysfs' tree having attribute files, each for a given
    21: All the nodes in the DT appear under "/ibm,opal/sensors" and each valid node in
    45: each OCC. Using this attribute each OCC can be asked to
    58: each OCC. Using this attribute each OCC can be asked to
    69: each OCC. Using this attribute each OCC can be asked to
    80: each OCC. Using this attribute each OCC can be asked to

/linux-6.1.9/drivers/net/ethernet/qlogic/qlcnic/
  qlcnic_dcb.c (all matches in qlcnic_83xx_dcb_query_cee_param()):
    570: struct qlcnic_dcb_param *each;    (local)
    597: each = &mbx_out.type[j];
    599: each->hdr_prio_pfc_map[0] = cmd.rsp.arg[k++];
    600: each->hdr_prio_pfc_map[1] = cmd.rsp.arg[k++];
    601: each->prio_pg_map[0] = cmd.rsp.arg[k++];
    602: each->prio_pg_map[1] = cmd.rsp.arg[k++];
    603: each->pg_bw_map[0] = cmd.rsp.arg[k++];
    604: each->pg_bw_map[1] = cmd.rsp.arg[k++];
    605: each->pg_tsa_map[0] = cmd.rsp.arg[k++];
    606: each->pg_tsa_map[1] = cmd.rsp.arg[k++];
    [all …]

/linux-6.1.9/tools/testing/selftests/firmware/
  settings:
    2: # 2 seconds). There are 3 test configs, each done with and without firmware
    3: # present, each with 2 "nowait" functions tested 5 times. Expected time for a
    5: # Additionally, fw_fallback may take 5 seconds for internal timeouts in each
    7: # 10 seconds for each testing config: 120 + 15 + 30

/linux-6.1.9/tools/perf/tests/shell/
  stat_bpf_counters_cgrp.sh:
    15: if ! perf stat -a --bpf-counters --for-each-cgroup / true > /dev/null 2>&1; then
    18: perf --no-pager stat -a --bpf-counters --for-each-cgroup / true || true
    53: …output=$(perf stat -a --bpf-counters --for-each-cgroup ${test_cgroups} -e cpu-clock -x, sleep 1 2…
    67: …output=$(perf stat -C 1 --bpf-counters --for-each-cgroup ${test_cgroups} -e cpu-clock -x, taskset …

/linux-6.1.9/scripts/
  find-unused-docs.sh:
    44: for each in "${files_included[@]}"; do
    45: FILES_INCLUDED[$each]="$each"

/linux-6.1.9/Documentation/devicetree/bindings/phy/
  apm-xgene-phy.txt:
    19: Two set of 3-tuple setting for each (up to 3)
    25: Two set of 3-tuple setting for each (up to 3)
    28: gain control. Two set of 3-tuple setting for each
    32: each (up to 3) supported link speed on the host.
    36: 3-tuple setting for each (up to 3) supported link
    40: 3-tuple setting for each (up to 3) supported link
    46: - apm,tx-speed : Tx operating speed. One set of 3-tuple for each
  phy-tegra194-p2u.yaml:
    14: Speed) each interfacing with 12 and 8 P2U instances respectively.
    16: each interfacing with 8, 8 and 8 P2U instances respectively.
    29: description: Should be the physical address space and length of respective each P2U instance.

/linux-6.1.9/Documentation/devicetree/bindings/gpio/
  gpio-max3191x.txt:
    18: - maxim,modesel-gpios: GPIO pins to configure modesel of each chip.
    20: (if each chip is driven by a separate pin) or 1
    22: - maxim,fault-gpios: GPIO pins to read fault of each chip.
    25: - maxim,db0-gpios: GPIO pins to configure debounce of each chip.
    28: - maxim,db1-gpios: GPIO pins to configure debounce of each chip.

/linux-6.1.9/Documentation/filesystems/nfs/
  pnfs.rst:
    6: reference multiple devices, each of which can reference multiple data servers.
    20: We reference the header for the inode pointing to it, across each
    22: LAYOUTCOMMIT), and for each lseg held within.
    34: nfs4_deviceid_cache). The cache itself is referenced across each
    36: the lifetime of each lseg referencing them.
    66: layout types: "files", "objects", "blocks", and "flexfiles". For each

/linux-6.1.9/Documentation/devicetree/bindings/pinctrl/
  pinctrl-bindings.txt:
    9: designated client devices. Again, each client device must be represented as a
    16: device is inactive. Hence, each client device can define a set of named
    35: For each client device individually, every pin state is assigned an integer
    36: ID. These numbers start at 0, and are contiguous. For each state ID, a unique
    47: pinctrl-0: List of phandles, each pointing at a pin configuration
    52: from multiple nodes for a single pin controller, each
    65: pinctrl-1: List of phandles, each pointing at a pin configuration
    68: pinctrl-n: List of phandles, each pointing at a pin configuration

/linux-6.1.9/Documentation/mm/damon/
  design.rst:
    96: Below four sections describe each of the DAMON core mechanisms and the five
    108: access to each page per ``sampling interval`` and aggregates the results. In
    109: other words, counts the number of the accesses to each page. After each
    135: one page in the region is required to be checked. Thus, for each ``sampling
    136: interval``, DAMON randomly picks one page in each region, waits for one
    153: adaptively merges and splits each region based on their access frequency.
    155: For each ``aggregation interval``, it compares the access frequencies of
    157: after it reports and clears the aggregated access frequency of each region, it
    158: splits each region into two or three regions if the total number of regions
    175: abstracted monitoring target memory area only for each of a user-specified time

/linux-6.1.9/Documentation/userspace-api/media/v4l/
  ext-ctrls-detect.rst:
    37: - The image is divided into a grid, each cell with its own motion
    41: - The image is divided into a grid, each cell with its own region
    55: Sets the motion detection thresholds for each cell in the grid. To
    61: Sets the motion detection region value for each cell in the grid. To

/linux-6.1.9/Documentation/bpf/
  map_cgroup_storage.rst:
    127: per-CPU variant will have different memory regions for each CPU for each
    128: storage. The non-per-CPU will have the same memory region for each storage.
    133: multiple attach types, and each attach creates a fresh zeroed storage. The
    136: There is a one-to-one association between the map of each type (per-CPU and
    138: each map can only be used by one BPF program and each BPF program can only use
    139: one storage map of each type. Because of map can only be used by one BPF
    153: However, the BPF program can still only associate with one map of each type

/linux-6.1.9/Documentation/devicetree/bindings/powerpc/4xx/
  cpm.txt:
    16: - unused-units : specifier consist of one cell. For each
    20: - idle-doze : specifier consist of one cell. For each
    24: - standby : specifier consist of one cell. For each
    28: - suspend : specifier consist of one cell. For each

/linux-6.1.9/Documentation/admin-guide/mm/damon/
  usage.rst:
    63: figure, parents-children relations are represented with indentations, each
    64: directory is having ``/`` suffix, and files in each directory are separated by
    107: are called DAMON context. DAMON executes each context with a kernel thread
    113: of child directories named ``0`` to ``N-1``. Each directory represents each
    119: In each kdamond directory, two files (``state`` and ``pid``) and one directory
    127: for each DAMON-based operation scheme of the kdamond. For details of the
    140: ``0`` to ``N-1``. Each directory represents each monitoring context. At the
    147: In each context directory, two files (``avail_operations`` and ``operations``)
    195: to ``N-1``. Each directory represents each monitoring target.
    200: In each target directory, one file (``pid_target``) and one directory
    [all …]

/linux-6.1.9/Documentation/devicetree/bindings/interconnect/
  qcom,rpmh-common.yaml:
    17: associated with each execution environment. Provider nodes must point to at
    18: least one RPMh device child node pertaining to their RSC and each provider
    37: Names for each of the qcom,bcm-voters specified.

/linux-6.1.9/Documentation/ABI/testing/
  procfs-smaps_rollup:
    7: except instead of an entry for each VMA in a process,
    9: for which each field is the sum of the corresponding
    13: the sum of the Pss field of each type (anon, file, shmem).

/linux-6.1.9/tools/power/cpupower/
  TODO:
    17: -> Bind forked process to each cpu.
    19: each cpu.
    22: each cpu.

/linux-6.1.9/Documentation/devicetree/bindings/iio/adc/
  aspeed,ast2600-adc.yaml:
    14: • The device split into two individual engine and each contains 8 voltage
    18: • Programmable upper and lower threshold for each channels.
    19: • Interrupt when larger or less than threshold for each channels.
    20: • Support hysteresis for each channels.

/linux-6.1.9/Documentation/leds/
  leds-qcom-lpg.rst:
    16: channels. The output of each PWM channel is routed to other hardware
    19: The each PWM channel can operate with a period between 27us and 384 seconds and
    37: therefor be identical for each element in the pattern (except for the pauses
    39: transitions expected by the leds-trigger-pattern format, each entry in the
    73: mode, in which case each run through the pattern is performed by first running

/linux-6.1.9/Documentation/devicetree/bindings/dma/
  st,stm32-mdma.yaml:
    13: described in the dma.txt file, using a five-cell specifier for each channel:
    24: 0x2: Source address pointer is incremented after each data transfer
    25: 0x3: Source address pointer is decremented after each data transfer
    28: 0x2: Destination address pointer is incremented after each data transfer
    29: 0x3: Destination address pointer is decremented after each data transfer

/linux-6.1.9/Documentation/cpu-freq/
  cpufreq-stats.rst:
    22: cpufreq-stats is a driver that provides CPU frequency statistics for each CPU.
    25: in /sysfs (<sysfs root>/devices/system/cpu/cpuX/cpufreq/stats/) for each CPU.
    65: This gives the amount of time spent in each of the frequencies supported by
    66: this CPU. The cat output will have "<frequency> <time>" pair in each line, which
    68: will have one line for each of the supported frequencies. usertime units here
    100: also contains the actual freq values for each row and column for better

/linux-6.1.9/Documentation/virt/acrn/
  io-request.rst:
    14: For each User VM, there is a shared 4-KByte memory region used for I/O requests
    20: used as an array of 16 I/O request slots with each I/O request slot being 256
    27: GPA falls in a certain range. Multiple I/O clients can be associated with each
    28: User VM. There is a special client associated with each User VM, called the
    30: any other clients. The ACRN userspace acts as the default client for each User

/linux-6.1.9/Documentation/networking/
  scaling.rst:
    30: applying a filter to each packet that assigns it to one of a small number
    31: of logical flows. Packets for each flow are steered to a separate receive
    41: implementation of RSS uses a 128-entry indirection table where each entry
    60: for each CPU if the device supports enough queues, or otherwise at least
    61: one for each memory domain, where a memory domain is a set of CPUs that
    79: that can route each interrupt to a particular CPU. The active mapping
    84: affinity of each interrupt see Documentation/core-api/irq/irq-affinity.rst. Some systems
    100: interrupts (and thus work) grows with each additional queue.
    103: processors with hyperthreading (HT), each hyperthread is represented as
    141: RPS may enqueue packets for processing. For each received packet,
    [all …]

/linux-6.1.9/Documentation/devicetree/bindings/sound/
  nvidia,tegra30-ahub.txt:
    8: - reg : Should contain the register physical address and length for each of
    13: - clocks : Must contain an entry for each entry in clock-names.
    18: - resets : Must contain an entry for each entry in reset-names.
    47: - dmas : Must contain an entry for each entry in clock-names.