Lines Matching refs:that
21 given that they for all intents and purposes hammer every CPU that
24 The one saving grace is that the hammer has grown a bit smaller
30 of that quiescent state.
49 or a state that is reached after some time.
62 Otherwise, it sets flags so that the outermost ``rcu_read_unlock()``
65 CPUs that might have RCU read-side critical sections.
71 When that happens, RCU will enqueue the task, which will then continue to
87 | the CPUs? After all, that would avoid all those real-time-unfriendly |
97 | ``rcu_read_unlock()`` invocation, which means that the remote state |
98 | testing would not help the worst-case latency that real-time |
104 | and it would be able to safely detect that state without needing to |
108 Please note that this is just the overall flow: Additional complications
126 an RCU read-side critical section. The best that RCU-sched's
128 that the CPU went idle while the IPI was in flight. If the CPU is idle,
135 quiescent state at that time.
149 #. The number of CPUs that have ever been online is tracked by the
151 structure's ``->ncpus_snap`` field tracks the number of CPUs that
153 period. Note that this number never decreases, at least in the
155 #. The identities of the CPUs that have ever been online are tracked by
158 identities of the CPUs that were online at least once at the
162 that is, when the ``rcu_node`` structure's ``->expmaskinitnext``
167 initialize that structure's ``->expmask`` at the beginning of each
168 RCU expedited grace period. This means that only those CPUs that have
171 #. Any CPU that goes offline will clear its bit in its leaf ``rcu_node``
172 structure's ``->qsmaskinitnext`` field, so any CPU with that bit
176 #. For each non-idle CPU that RCU believes is currently online, the
178 succeeds, the CPU was fully online. Failure indicates that the CPU is
185 that CPU. However, this is likely paranoia-induced redundancy.
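The mask handling sketched in the fragments above (an ever-growing ever-online mask, a currently-online mask that shrinks on CPU-offline, and a per-grace-period mask initialized at expedited grace-period start) can be illustrated as follows. This is a simplified single-node sketch; the field names are borrowed from the kernel's ``rcu_node`` structure, but the logic here is illustrative only, not the actual implementation:

```c
/* Simplified single-node sketch of the expedited grace-period masks. */
struct exp_node {
	unsigned long expmaskinitnext;	/* CPUs ever online: never shrinks */
	unsigned long qsmaskinitnext;	/* CPUs currently online */
	unsigned long expmask;		/* CPUs blocking the current exp GP */
};

/* A CPU coming online sets its bit in both "initnext" masks. */
static void exp_cpu_online(struct exp_node *np, int cpu)
{
	np->expmaskinitnext |= 1UL << cpu;
	np->qsmaskinitnext |= 1UL << cpu;
}

/* A CPU going offline clears only its currently-online bit; the
 * ever-online mask never loses bits, matching "this number never
 * decreases" above. */
static void exp_cpu_offline(struct exp_node *np, int cpu)
{
	np->qsmaskinitnext &= ~(1UL << cpu);
}

/* At the start of each expedited grace period, the set of CPUs to
 * wait on is initialized from the ever-online mask; CPUs whose
 * currently-online bit is clear can then be reported quiescent
 * immediately. */
static void exp_gp_init(struct exp_node *np)
{
	np->expmask = np->expmaskinitnext;
}
```

In this sketch, a CPU that was once online but has since gone offline still has its ``->expmaskinitnext`` bit set, so it appears in the freshly initialized ``->expmask``; its cleared ``->qsmaskinitnext`` bit is what lets it be reported quiescent without an IPI.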
191 | CPUs that were once online? Why not just have a single set of masks |
201 | result in bits set at the top of the tree that have no counterparts |
203 | will result in grace-period hangs. In short, that way lies madness, |
205 | In contrast, the current multi-mask multi-counter scheme ensures that |
237 not permitted within the idle loop, if ``rcu_exp_handler()`` sees that
241 regardless of whether or not that quiescent state was due to the CPU
245 bitmask of CPUs that must be IPIed, just before sending each IPI, and
255 that a single expedited grace-period operation will cover all requests
261 an even value otherwise, so that dividing the counter value by two gives
264 indicating that a grace period has elapsed. Therefore, if the initial
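The counter scheme referenced above (odd while an expedited grace period is in progress, even otherwise, with the value divided by two counting completed grace periods) can be sketched with the following helpers. The function names are illustrative, not the kernel's actual sequence-counter API:

```c
#include <stdbool.h>

/* Mark the start of an expedited grace period: counter becomes odd. */
static inline void exp_seq_start(unsigned long *sp)
{
	(*sp)++;
}

/* Mark the end of an expedited grace period: counter becomes even. */
static inline void exp_seq_end(unsigned long *sp)
{
	(*sp)++;
}

/* Return the counter value that, once reached, guarantees a full
 * grace period has elapsed since this snapshot was taken.  If a
 * grace period is already in progress (odd counter), it might have
 * started too early to help, so a further full grace period must
 * complete. */
static inline unsigned long exp_seq_snap(unsigned long s)
{
	return (s + 3UL) & ~0x1UL;
}

/* Has a full grace period elapsed since snapshot @snap was taken? */
static inline bool exp_seq_done(unsigned long s, unsigned long snap)
{
	return (long)(s - snap) >= 0;
}
```

With this scheme, any request whose snapshot has already been satisfied can return without starting a new grace period, which is what enables the batching of concurrent requests mentioned later in the document.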
281 period, and that there be an efficient way for the remaining requests to
282 wait for that grace period to complete. However, that is the topic of
317 Suppose that Task A wins, recording its desired grace-period sequence
323 up to the root ``rcu_node`` structure, and, seeing that its desired
329 | Why ``->exp_wq[1]``? Given that the value of these tasks' desired |
336 | Recall that the bottom bit of the desired sequence number indicates |
344 desired grace-period sequence number, and see that both leaf
345 ``rcu_node`` structures already have that value recorded. They will
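The funnel-locking pattern implied by these fragments (tasks climbing the ``rcu_node`` tree, recording their desired grace-period sequence number at each level, and stopping as soon as they find it already recorded) can be sketched as below. The type and field names here are hypothetical simplifications, not the kernel's actual data structures:

```c
#include <stdbool.h>
#include <stddef.h>

/* One level of the funnel: a node records the highest grace-period
 * sequence number requested through it. */
struct funnel_node {
	unsigned long exp_seq_rq;	/* highest sequence number requested */
	struct funnel_node *parent;	/* NULL at the root */
};

/* Climb from @np toward the root on behalf of a request for the
 * grace period identified by @want.  Return true if the caller must
 * start the grace period itself, false if an earlier task has
 * already taken responsibility and the caller need only wait. */
static bool exp_funnel(struct funnel_node *np, unsigned long want)
{
	for (; np; np = np->parent) {
		if ((long)(np->exp_seq_rq - want) >= 0)
			return false;	/* already requested: just wait */
		np->exp_seq_rq = want;	/* record and keep climbing */
	}
	return true;	/* reached past the root: start the GP */
}
```

This mirrors the narrative above: the first task records its number all the way up and starts the grace period, a second task racing up from another leaf stops at the root, and later tasks stop at their own leaf.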
360 Task A to finish so that it can start the next grace period. The
379 Note that three of the root ``rcu_node`` structure's waitqueues are now
391 | What happens if Task A takes so long to do its wakeups that Task E's |
413 ``schedule_work()`` (from ``_synchronize_rcu_expedited()``) so that a
419 only four sets of waitqueues, it is necessary to ensure that the
423 wakeups. The key point is that the ``->exp_mutex`` is not released until
424 the first wakeup is complete, which means that the ``->exp_wake_mutex``
425 has already been acquired at that point. This approach ensures that the
427 grace period is in process, but that these wakeups will complete before
428 the next grace period starts. This means that only three waitqueues are
429 required, guaranteeing that the four that are provided are sufficient.
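The waitqueue sizing argument in the fragments above can be made concrete with a small indexing sketch: with the bottom bit of the sequence number serving as a state flag, the next two bits select one of four waitqueue slots round-robin, and the mutex ordering described above guarantees that at most three slots are in use at any time. The helper name is illustrative:

```c
/* Map a grace-period sequence number to one of four waitqueue
 * slots.  Bit 0 is the in-progress flag, so consecutive grace
 * periods differ by 2 in the counter and land in consecutive
 * slots. */
static inline int exp_wq_index(unsigned long seq)
{
	return (seq >> 1) & 0x3;
}
```

Four consecutive grace periods thus occupy four distinct slots before the index wraps, so three concurrently waited-on grace periods can never share a slot.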
442 | stalls, given that a given reader must block both normal and |
447 | Because it is quite possible that at a given time there is no normal |
462 The use of workqueues has the advantage that the expedited grace-period
464 corresponding disadvantage that workqueues cannot be used until they are
466 spawns the first task. Given that there are parts of the kernel that
470 What they do is to fall back to the old practice of requiring that the
481 The current code assumes that there are no POSIX signals during the
497 batching, so that a single grace-period operation can serve numerous
499 of a concurrent group that will request the grace period. All members of