Searched refs:preemption (Results 1 – 25 of 62) sorted by relevance
/linux-6.6.21/Documentation/locking/
  preempt-locking.rst
    35: protect these situations by disabling preemption around them.
    37: You can also use put_cpu() and get_cpu(), which will disable preemption.
    44: Under preemption, the state of the CPU must be protected. This is arch-
    47: section that must occur while preemption is disabled. Think what would happen
    50: upon preemption, the FPU registers will be sold to the lowest bidder. Thus,
    51: preemption must be disabled around such regions.
    54: kernel_fpu_begin and kernel_fpu_end will disable and enable preemption.
    72: Data protection under preemption is achieved by disabling preemption for the
    84: n-times in a code path, and preemption will not be reenabled until the n-th
    86: preemption is not enabled.
    [all …]
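The discipline these preempt-locking.rst lines describe can be sketched in a few lines of C; my_scratch is a hypothetical per-CPU variable, and get_cpu()/put_cpu() nest the same way raw preempt_disable()/preempt_enable() do::

    #include <linux/smp.h>
    #include <linux/percpu.h>

    static DEFINE_PER_CPU(int, my_scratch);    /* hypothetical per-CPU state */

    static void touch_scratch(void)
    {
            int cpu = get_cpu();               /* disables preemption */

            per_cpu(my_scratch, cpu)++;        /* cannot migrate off this CPU */
            put_cpu();                         /* re-enables preemption */
    }

kernel_fpu_begin()/kernel_fpu_end(), mentioned at line 54, bracket FPU use with the same disable/enable pairing.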
  locktypes.rst
    59: preemption and interrupt disabling primitives. Contrary to other locking
    60: mechanisms, disabling preemption or interrupts are pure CPU local
    76: Spinning locks implicitly disable preemption and the lock / unlock functions
    103: PI has limitations on non-PREEMPT_RT kernels due to preemption and
    106: PI clearly cannot preempt preemption-disabled or interrupt-disabled
    162: by disabling preemption or interrupts.
    164: On non-PREEMPT_RT kernels local_lock operations map to the preemption and
    200: local_lock should be used in situations where disabling preemption or
    204: local_lock is not suitable to protect against preemption or interrupts on a
    217: preemption or interrupts is required, for example, to safely access
    [all …]
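A minimal local_lock sketch matching the locktypes.rst lines above (struct and field names are illustrative): on non-PREEMPT_RT kernels the lock/unlock pair maps to preempt_disable()/preempt_enable(), while on PREEMPT_RT it becomes a per-CPU spinlock::

    #include <linux/local_lock.h>
    #include <linux/percpu.h>

    struct my_stats {
            local_lock_t lock;
            unsigned long count;
    };

    static DEFINE_PER_CPU(struct my_stats, my_stats) = {
            .lock = INIT_LOCAL_LOCK(lock),
    };

    static void stats_inc(void)
    {
            local_lock(&my_stats.lock);        /* preempt_disable() on !RT */
            this_cpu_inc(my_stats.count);
            local_unlock(&my_stats.lock);
    }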
  seqlock.rst
    47: preemption, preemption must be explicitly disabled before entering the
    72: /* Serialized context with disabled preemption */
    107: For lock types which do not implicitly disable preemption, preemption
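For a plain seqcount_t, the requirement quoted from seqlock.rst (explicit preemption disabling around the write side) looks roughly like this sketch; my_seq and my_value are illustrative, and the associated-lock seqcount variants can take care of this implicitly::

    #include <linux/seqlock.h>

    static seqcount_t my_seq = SEQCNT_ZERO(my_seq);
    static u64 my_value;                       /* hypothetical protected data */

    static void my_write(u64 v)
    {
            preempt_disable();                 /* plain seqcount_t: explicit */
            write_seqcount_begin(&my_seq);
            my_value = v;                      /* serialized writer section */
            write_seqcount_end(&my_seq);
            preempt_enable();
    }

    static u64 my_read(void)
    {
            unsigned int seq;
            u64 v;

            do {
                    seq = read_seqcount_begin(&my_seq);
                    v = my_value;
            } while (read_seqcount_retry(&my_seq, seq));

            return v;
    }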
  hwspinlock.rst
    95: Upon a successful return from this function, preemption is disabled so
    111: Upon a successful return from this function, preemption and the local
    127: Upon a successful return from this function, preemption is disabled,
    178: Upon a successful return from this function, preemption is disabled so
    195: Upon a successful return from this function, preemption and the local
    211: Upon a successful return from this function, preemption is disabled,
    268: Upon a successful return from this function, preemption and local
    280: Upon a successful return from this function, preemption is reenabled,
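All of the hwspinlock.rst lines above state variants of one contract; here is a sketch of the plain timeout variant (the 100 ms timeout and the shared-memory body are illustrative). The _irq and _irqsave variants additionally mask local interrupts::

    #include <linux/hwspinlock.h>

    static int poke_shared_mem(struct hwspinlock *hwlock)
    {
            int ret;

            ret = hwspin_lock_timeout(hwlock, 100);    /* timeout in msecs */
            if (ret)
                    return ret;

            /* preemption is disabled here until hwspin_unlock() */
            /* ... access memory shared with a remote processor ... */

            hwspin_unlock(hwlock);                     /* preemption back on */
            return 0;
    }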
  ww-mutex-design.rst
    53: running transaction. Note that this is not the same as process preemption. A
    350: The Wound-Wait preemption is implemented with a lazy-preemption scheme:
    354: wounded status and retries. A great benefit of implementing preemption in
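The transaction preemption (wounding) that ww-mutex-design.rst contrasts with process preemption surfaces in callers as -EDEADLK plus a retry, which is the lazy scheme line 350 refers to. A condensed sketch of that backoff pattern, with an illustrative ww_class; the caller is expected to unlock both mutexes and call ww_acquire_fini() afterwards::

    #include <linux/ww_mutex.h>

    static DEFINE_WW_CLASS(my_ww_class);       /* illustrative class */

    static int lock_pair(struct ww_mutex *a, struct ww_mutex *b,
                         struct ww_acquire_ctx *ctx)
    {
            int ret;

            ww_acquire_init(ctx, &my_ww_class);

            ret = ww_mutex_lock(a, ctx);       /* nothing held yet: cannot die */
            if (ret)
                    return ret;

            while ((ret = ww_mutex_lock(b, ctx)) == -EDEADLK) {
                    /* Wounded by an older transaction: drop everything,
                     * sleep on the contended lock, then retake the other. */
                    ww_mutex_unlock(a);
                    ww_mutex_lock_slow(b, ctx);
                    swap(a, b);
            }
            if (!ret)
                    ww_acquire_done(ctx);      /* both locks now held */

            return ret;
    }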
/linux-6.6.21/kernel/
  Kconfig.preempt
    22: This is the traditional Linux preemption model, geared towards
    38: "explicit preemption points" to the kernel code. These new
    39: preemption points have been selected to reduce the maximum
    61: otherwise not be about to reach a natural preemption point.
    103: This option allows to define the preemption model on the kernel
    104: command line parameter and thus override the default preemption
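Lines 103-104 describe choosing the model at boot time; assuming the option quoted is PREEMPT_DYNAMIC, the override is the preempt= kernel command line parameter::

    preempt=none          # no forced preemption
    preempt=voluntary     # explicit preemption points
    preempt=full          # fully preemptible kernel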
/linux-6.6.21/Documentation/core-api/
  entry.rst
    167: irq_enter_rcu() updates the preemption count which makes in_hardirq()
    172: irq_exit_rcu() handles interrupt time accounting, undoes the preemption
    175: In theory, the preemption count could be updated in irqentry_enter(). In
    176: practice, deferring this update to irq_enter_rcu() allows the preemption-count
    180: preemption count has not yet been updated with the HARDIRQ_OFFSET state.
    182: Note that irq_exit_rcu() must remove HARDIRQ_OFFSET from the preemption count
    185: also requires that HARDIRQ_OFFSET has been removed from the preemption count.
    223: Note that the update of the preemption counter has to be the first
    226: preemption count modification in the NMI entry/exit case must not be
  local_ops.rst
    42: making sure that we modify it from within a preemption safe context. It is
    76: preemption already disabled. I suggest, however, to explicitly
    77: disable preemption anyway to make sure it will still work correctly on
    104: local atomic operations: it makes sure that preemption is disabled around write
    110: If you are already in a preemption-safe context, you can use
    161: * preemptible context (it disables preemption) :
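The local_ops.rst lines above correspond to this pattern (the counter name is taken from that document's own example): get_cpu_var() disables preemption around the local_t update, and cross-CPU reads need no protection at all::

    #include <linux/percpu.h>
    #include <asm/local.h>

    static DEFINE_PER_CPU(local_t, counters) = LOCAL_INIT(0);

    static void count_event(void)
    {
            local_inc(&get_cpu_var(counters)); /* disables preemption */
            put_cpu_var(counters);             /* re-enables it */
    }

    static long read_count(int cpu)
    {
            return local_read(&per_cpu(counters, cpu));
    }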
  this_cpu_ops.rst
    20: necessary to disable preemption or interrupts to ensure that the
    44: The following this_cpu() operations with implied preemption protection
    46: preemption and interrupts::
    110: reserved for a specific processor. Without disabling preemption in the
    142: preemption has been disabled. The pointer is then used to
    143: access local per cpu data in a critical section. When preemption
    230: preemption. If a per cpu variable is not used in an interrupt context
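A short sketch of the implied protection those this_cpu_ops.rst lines describe (the hits counter is hypothetical): the per-CPU address calculation and the update happen as one indivisible operation, so no surrounding preempt_disable() is needed::

    #include <linux/percpu.h>

    static DEFINE_PER_CPU(unsigned long, hits);

    static void record_hit(void)
    {
            this_cpu_inc(hits);     /* safe without disabling preemption */
    }

    static unsigned long sum_hits(void)
    {
            unsigned long sum = 0;
            int cpu;

            for_each_possible_cpu(cpu)
                    sum += per_cpu(hits, cpu);

            return sum;
    }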
/linux-6.6.21/Documentation/trace/rv/
  monitor_wip.rst
    13: preemption disabled::
    30: The wakeup event always takes place with preemption disabled because
/linux-6.6.21/Documentation/RCU/
  NMI-RCU.rst
    45: The do_nmi() function processes each NMI. It first disables preemption
    50: preemption is restored.
    95: CPUs complete any preemption-disabled segments of code that they were
    97: Since NMI handlers disable preemption, synchronize_rcu() is guaranteed
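The NMI-RCU.rst lines describe roughly the following shape (the callback hook is hypothetical): because NMI handlers always run with preemption disabled, synchronize_rcu(), which waits for all preemption-disabled regions to finish, also waits out any handler still using the old pointer::

    #include <linux/rcupdate.h>

    static void (*nmi_callback)(void);          /* hypothetical hook */

    void my_nmi_handler(void)                   /* NMI: preemption disabled */
    {
            void (*fn)(void) = rcu_dereference_sched(nmi_callback);

            if (fn)
                    fn();
    }

    void clear_nmi_callback(void)
    {
            rcu_assign_pointer(nmi_callback, NULL);
            synchronize_rcu();  /* no CPU can still be in the old callback */
    }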
  rcubarrier.rst
    330: disables preemption, which acted as an RCU read-side critical
    367: Therefore, on_each_cpu() disables preemption across its call
    370: preemption-disabled regions of code as RCU read-side critical
    376: But if on_each_cpu() ever decides to forgo disabling preemption,
/linux-6.6.21/Documentation/virt/kvm/devices/
  arm-vgic.rst
    99: maximum possible 128 preemption levels. The semantics of the register
    100: indicate if any interrupts in a given preemption level are in the active
    103: Thus, preemption level X has one or more active interrupts if and only if:
    107: Bits for undefined preemption levels are RAZ/WI.
/linux-6.6.21/arch/arc/kernel/
  entry-compact.S
    152: ; if L2 IRQ interrupted a L1 ISR, disable preemption
    157: ; -preemption off IRQ, user task in syscall picked to run
    172: ; bump thread_info->preempt_count (Disable preemption)
    352: ; decrement thread_info->preempt_count (re-enable preemption)
  entry.S
    329: ; Can't preempt if preemption disabled
/linux-6.6.21/Documentation/tools/rtla/
  common_osnoise_description.rst
    3: time in a loop while with preemption, softirq and IRQs enabled, thus
/linux-6.6.21/Documentation/tools/rv/
  rv-mon-wip.rst
    21: checks if the wakeup events always take place with preemption disabled.
/linux-6.6.21/Documentation/mm/
  highmem.rst
    66: CPU while the mapping is active. Although preemption is never disabled by
    73: As said, pagefaults and preemption are never disabled. There is no need to
    74: disable preemption because, when context switches to a different task, the
    110: effects of atomic mappings, i.e. disabling page faults or preemption, or both.
    141: restrictions on preemption or migration. It comes with an overhead as mapping
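What those highmem.rst lines say about kmap_local_page(), as a small sketch (the copy helper is illustrative): the mapping is CPU-local and migration is prevented, but preemption and page faults stay enabled throughout::

    #include <linux/highmem.h>
    #include <linux/string.h>

    static void read_page(struct page *page, void *dst, size_t len)
    {
            void *src = kmap_local_page(page); /* preemption stays enabled */

            memcpy(dst, src, len);
            kunmap_local(src);                 /* unmap, reverse order if nested */
    }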
/linux-6.6.21/drivers/gpu/drm/i915/
  Kconfig.profile
    59: How long to wait (in milliseconds) for a preemption event to occur
    77: How long to wait (in milliseconds) for a preemption event to occur
/linux-6.6.21/Documentation/translations/zh_CN/core-api/
  local_ops.rst
    155: * preemptible context (it disables preemption) :
/linux-6.6.21/Documentation/arch/arm/
  kernel_mode_neon.rst
    14: preemption disabled
    58: * NEON/VFP code is executed with preemption disabled.
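The kernel_mode_neon.rst rule quoted above, sketched (the SIMD body is elided): kernel_neon_begin() disables preemption and saves the user NEON/VFP register state, and kernel_neon_end() undoes both::

    #include <asm/neon.h>
    #include <asm/simd.h>

    static void do_simd_work(void)
    {
            if (!may_use_simd())    /* e.g. called from hardirq context */
                    return;         /* use a scalar fallback instead */

            kernel_neon_begin();    /* preemption off, user state saved */
            /* ... NEON/VFP instructions may run here ... */
            kernel_neon_end();      /* state restored, preemption back on */
    }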
/linux-6.6.21/include/rdma/
  opa_port_info.h
    321: } preemption; (member)
/linux-6.6.21/Documentation/gpu/rfc/
  i915_scheduler.rst
    43: * Features like timeslicing / preemption / virtual engines would
    56: preemption, timeslicing, etc... so it is possible for jobs to
/linux-6.6.21/Documentation/driver-api/
  io-mapping.rst
    53: io_mapping_map_atomic_wc() has the side effect of disabling preemption and
/linux-6.6.21/Documentation/trace/
  osnoise-tracer.rst
    29: similar loop with preemption, SoftIRQs and IRQs enabled, thus allowing
    129: - OSNOISE_PREEMPT_DISABLE: disable preemption while running the osnoise