Lines Matching refs:is
44 The return value of @code{getrusage} is zero for success, and @code{-1}
49 The argument @var{processes} is not valid.
53 One way of getting resource usage for a particular child process is with
70 The maximum resident set size used, in kilobytes. That is, the maximum
80 An integral value expressed the same way, which is the amount of
84 An integral value expressed the same way, which is the amount of
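As a rough illustration, a minimal sketch of reading these fields for the calling process might look like this (@code{RUSAGE_CHILDREN} would instead report usage of terminated, waited-for children):

@smallexample
#include <stdio.h>
#include <sys/resource.h>

int
main (void)
@{
  struct rusage usage;
  if (getrusage (RUSAGE_SELF, &usage) != 0)
    @{
      perror ("getrusage");
      return 1;
    @}
  /* Maximum resident set size, in kilobytes.  */
  printf ("maxrss: %ld kB\n", usage.ru_maxrss);
  /* User and system CPU time consumed so far.  */
  printf ("user: %ld.%06ld s, system: %ld.%06ld s\n",
          (long) usage.ru_utime.tv_sec, (long) usage.ru_utime.tv_usec,
          (long) usage.ru_stime.tv_sec, (long) usage.ru_stime.tv_usec);
  return 0;
@}
@end smallexample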
141 The current limit is the value the system will not allow usage to
142 exceed. It is also called the ``soft limit'' because the process being
148 The maximum limit is the maximum value to which a process is allowed to
149 set its current limit. It is also called the ``hard limit'' because
150 there is no way for a process to get around it. A process may lower
169 The return value is @code{0} on success and @code{-1} on failure. The
170 only possible @code{errno} error condition is @code{EFAULT}.
173 32-bit system this function is in fact @code{getrlimit64}. Thus, the
181 This function is similar to @code{getrlimit} but its second parameter is
187 32-bit machine, this function is available under the name
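A minimal sketch of reading one limit with @code{getrlimit} might look like the following; @code{RLIMIT_CORE} is used here purely as an example resource:

@smallexample
#include <stdio.h>
#include <sys/resource.h>

int
main (void)
@{
  struct rlimit rl;
  if (getrlimit (RLIMIT_CORE, &rl) != 0)
    @{
      perror ("getrlimit");
      return 1;
    @}
  /* rlim_cur is the current (soft) limit, rlim_max the maximum (hard)
     limit; RLIM_INFINITY means the resource is not limited.  */
  if (rl.rlim_cur == RLIM_INFINITY)
    puts ("core file size: unlimited");
  else
    printf ("core file size: %llu bytes\n",
            (unsigned long long) rl.rlim_cur);
  return 0;
@}
@end smallexample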
198 The return value is @code{0} on success and @code{-1} on failure. The
199 following @code{errno} error condition is possible:
208 The process tried to raise a maximum limit, but it does not have superuser privileges.
213 32-bit system this function is in fact @code{setrlimit64}. Thus, the
221 This function is similar to @code{setrlimit} but its second parameter is
227 32-bit machine this function is available under the name
233 This structure is used with @code{getrlimit} to receive limit values,
245 For @code{getrlimit}, the structure is an output; it receives the current
249 For the LFS functions a similar type is defined in @file{sys/resource.h}.
253 This structure is analogous to the @code{rlimit} structure above, but
258 This is analogous to @code{rlimit.rlim_cur}, but with a different type.
261 This is analogous to @code{rlimit.rlim_max}, but with a different type.
266 Here is a list of resources for which you can specify a limit. Memory
273 longer than this, it gets a signal: @code{SIGXCPU}. The value is
298 file is created. So setting this limit to zero prevents core files from
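For instance, a process that wants to make sure no core files are written could lower its soft limit to zero; a minimal sketch, leaving the hard limit untouched:

@smallexample
#include <stdio.h>
#include <sys/resource.h>

int
main (void)
@{
  struct rlimit rl;
  if (getrlimit (RLIMIT_CORE, &rl) != 0)
    @{
      perror ("getrlimit");
      return 1;
    @}
  rl.rlim_cur = 0;              /* change only the soft limit */
  if (setrlimit (RLIMIT_CORE, &rl) != 0)
    @{
      perror ("setrlimit");
      return 1;
    @}
  return 0;
@}
@end smallexample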
304 This parameter is a guide for the system's scheduler and memory
305 allocator; the system may give the process more memory when there is a
363 If you are getting a limit, the command argument is the only argument.
364 If you are setting a limit, there is a second argument:
365 @code{long int} @var{limit}, which is the value to which you are setting
385 When you successfully get a limit, the return value of @code{ulimit} is
386 that limit, which is never negative. When you successfully set a limit,
387 the return value is zero. When the function fails, the return value is
388 @code{-1} and @code{errno} is set according to the reason:
392 A process tried to increase a maximum limit, but it does not have superuser privileges.
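A minimal sketch of the getting case might look like this, assuming the @code{UL_GETFSIZE} command from @file{ulimit.h}, which reports the file size limit in units of 512-byte blocks:

@smallexample
#include <stdio.h>
#include <ulimit.h>

int
main (void)
@{
  long int fsize = ulimit (UL_GETFSIZE);
  if (fsize == -1)
    perror ("ulimit");
  else
    printf ("file size limit: %ld 512-byte blocks\n", fsize);
  return 0;
@}
@end smallexample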
427 The return value is zero for success, and @code{-1} with @code{errno} set
445 get it. This section describes how that determination is made and
448 It is common to refer to CPU scheduling simply as scheduling and a
450 resource being implied. Bear in mind, though, that CPU time is not the
452 cases, it is not even particularly important. Giving a process a high
457 CPU scheduling is a complex issue and different systems do it in wildly
463 For simplicity, we talk about CPU contention as if there is only one CPU
466 any one time is equal to the number of CPUs, you can easily extrapolate
474 the Linux implementation is quite the inverse of what the authors of the
492 Every process has an absolute priority, and it is represented by a number.
497 absolute priority 0 and this section is irrelevant. In that case,
499 accommodate realtime systems, in which it is vital that certain processes
507 one with the higher absolute priority always gets it. This is true even if the
508 process with the lower priority is already using the CPU (i.e., the
509 scheduling is preemptive). Of course, we're only talking about
512 for something like I/O, its absolute priority is irrelevant.
515 @strong{NB:} The term ``runnable'' is a synonym for ``ready to run.''
519 CPU is determined by the scheduling policy. If the processes have
540 tell you what the range is on a particular system.
545 One thing you must keep in mind when designing real time applications is
550 Interrupt handlers live in that limbo between processes. The CPU is
559 processes get to run while the page faults in is of no consequence,
560 because as soon as the I/O is complete, the higher priority process will
572 order to run. The errant program is in complete control. It controls
585 and lowers it when the process is exceeding it.
587 @strong{NB:} The absolute priority is sometimes called the ``static
600 is as described in this section.
603 the decision is much simpler, and is described in @ref{Absolute
616 The most sensible case is where all the processes with a certain
631 careful control of interrupts and page faults, is the one to use when a
643 In both cases, the ready to run list is organized as a true queue, where
644 a process gets pushed onto the tail when it becomes ready to run and is
647 scheduler runs a process, that process is no longer ready to run and no
651 The only difference between a process that is assigned the Round Robin
652 scheduling policy and a process that is assigned First Come First Serve
653 is that in the former case, the process is automatically booted off the
656 The time quantum we're talking about is small. Really small. This is
658 round robin time slice is a thousand times shorter than its typical
674 section, the macro @code{_POSIX_PRIORITY_SCHEDULING} is defined in
677 For the case that the scheduling policy is traditional scheduling, more
683 on systems that use @theglibc{} is the inverse of what the POSIX
685 scheduling parameter is the scheduling policy and that the priority
686 value, if any, is a parameter of the scheduling policy. In the
687 implementation, though, the priority value is king and the scheduling
696 to POSIX. This is why the following description refers to tasks and
719 or the calling task if @var{pid} is zero. If @var{policy} is
735 @c the effect is, but it must be subtle.
737 On success, the return value is @code{0}. Otherwise, it is @code{-1}
738 and @code{errno} is set accordingly. The @code{errno} values specific
746 @var{policy} is not @code{SCHED_OTHER} (or it's negative and the
747 existing policy is not @code{SCHED_OTHER}).
751 owner is not the target task's owner. I.e., the effective uid of the
752 calling task is neither the effective nor the real uid of task
758 There is no task with pid @var{pid} and @var{pid} is not zero.
766 The absolute priority value identified by *@var{param} is outside the
768 scheduling policy if @var{policy} is negative) or @var{param} is
770 tell you what the valid range is.
773 @var{pid} is negative.
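A minimal sketch of putting the calling task under the @code{SCHED_FIFO} policy might look like this; the priority value @code{10} is only an assumption, and a real program would first query the valid range:

@smallexample
#include <stdio.h>
#include <sched.h>

int
main (void)
@{
  struct sched_param param;
  param.sched_priority = 10;    /* assumed to lie within the valid range */

  /* A pid of zero means the calling task.  Raising the absolute
     priority normally requires superuser privileges.  */
  if (sched_setscheduler (0, SCHED_FIFO, &param) == -1)
    @{
      perror ("sched_setscheduler");
      return 1;
    @}
  return 0;
@}
@end smallexample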
786 ID @var{pid}, or the calling task if @var{pid} is zero.
788 The return value is the scheduling policy. See
791 If the function fails, the return value is instead @code{-1} and
792 @code{errno} is set accordingly.
799 There is no task with pid @var{pid} and it is not zero.
802 @var{pid} is negative.
806 Note that this function is not an exact mate to @code{sched_setscheduler}
821 It is functionally identical to @code{sched_setscheduler} with
835 @var{pid} is the task ID of the task whose absolute priority you want
838 @var{param} is a pointer to a structure in which the function stores the
841 On success, the return value is @code{0}. Otherwise, it is @code{-1}
842 and @code{errno} is set accordingly. The @code{errno} values specific
848 There is no task with ID @var{pid} and it is not zero.
851 @var{pid} is negative.
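A sketch combining @code{sched_getscheduler} and @code{sched_getparam} to report the calling task's policy and absolute priority:

@smallexample
#include <stdio.h>
#include <sched.h>

int
main (void)
@{
  int policy = sched_getscheduler (0);   /* 0 means the calling task */
  struct sched_param param;

  if (policy == -1 || sched_getparam (0, &param) == -1)
    @{
      perror ("sched_getscheduler/sched_getparam");
      return 1;
    @}
  printf ("policy: %s, absolute priority: %d\n",
          policy == SCHED_FIFO ? "SCHED_FIFO"
          : policy == SCHED_RR ? "SCHED_RR"
          : policy == SCHED_OTHER ? "SCHED_OTHER" : "other",
          param.sched_priority);
  return 0;
@}
@end smallexample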
863 This function returns the lowest absolute priority value that is
866 On Linux, it is 0 for @code{SCHED_OTHER} and 1 for everything else.
868 On success, the return value is the priority value itself. Otherwise, it is @code{-1}
869 and @code{errno} is set accordingly. The @code{errno} values specific
884 This function returns the highest absolute priority value that is
887 On Linux, it is 0 for @code{SCHED_OTHER} and 99 for everything else.
889 On success, the return value is the priority value itself. Otherwise, it is @code{-1}
890 and @code{errno} is set accordingly. The @code{errno} values specific
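A sketch of querying the range of valid absolute priorities for a policy, here @code{SCHED_RR}:

@smallexample
#include <stdio.h>
#include <sched.h>

int
main (void)
@{
  int lo = sched_get_priority_min (SCHED_RR);
  int hi = sched_get_priority_max (SCHED_RR);

  if (lo == -1 || hi == -1)
    @{
      perror ("sched_get_priority_min/max");
      return 1;
    @}
  printf ("SCHED_RR priorities range from %d to %d\n", lo, hi);
  return 0;
@}
@end smallexample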
906 the Round Robin scheduling policy, if it is used, for the task with
910 @c We need a cross-reference to where timespec is explained. But that
912 @c reorganized so there is a place to put it (which will be right next
913 @c to timeval, which is presently misplaced). 2000.05.07.
915 With a Linux kernel, the round robin time slice is always 150
918 The return value is @code{0} on success and in the pathological case
919 that it fails, the return value is @code{-1} and @code{errno} is set
920 accordingly. There is nothing specific that can go wrong with this
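A sketch of reading the round robin quantum for the calling task:

@smallexample
#include <stdio.h>
#include <time.h>
#include <sched.h>

int
main (void)
@{
  struct timespec quantum;
  if (sched_rr_get_interval (0, &quantum) != 0)
    @{
      perror ("sched_rr_get_interval");
      return 1;
    @}
  printf ("round robin time slice: %ld.%09ld seconds\n",
          (long) quantum.tv_sec, (long) quantum.tv_nsec);
  return 0;
@}
@end smallexample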
933 immediately ready to run (as opposed to running, which is what it was
937 turn next arrives. If its absolute priority is 0, it is more
944 To the extent that the containing program is oblivious to what other
948 The return value is @code{0} on success and in the pathological case
949 that it fails, the return value is @code{-1} and @code{errno} is set
950 accordingly. There is nothing specific that can go wrong with this
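A minimal sketch of a loop that repeatedly offers the CPU to other ready-to-run processes:

@smallexample
#include <stdio.h>
#include <sched.h>

int
main (void)
@{
  /* Each call puts the calling process at the end of the queue for
     its priority and lets another ready-to-run process go first.  */
  for (int i = 0; i < 1000; ++i)
    if (sched_yield () != 0)
      @{
        perror ("sched_yield");
        return 1;
      @}
  puts ("yielded 1000 times");
  return 0;
@}
@end smallexample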
959 This section is about the scheduling among processes whose absolute
960 priority is 0. When the system hands out the scraps of CPU time that
995 over time. The dynamic priority is meaningless for processes with
1002 In Linux, the value is a combination of these things, but mostly it
1006 something like wait for I/O, it is favored for getting the CPU back when
1008 selection of processes for new time slices is basically round robin.
1010 process' dynamic priority rises every time it is snubbed in the
1013 The fluctuation of a process' dynamic priority is regulated by another
1014 value: The ``nice'' value. The nice value is an integer, usually in the
1023 The idea of the nice value is deferential courtesy. In the beginning,
1028 Hence, the higher a process' nice value, the nicer the process is.
1085 On success, the return value is the nice value. Otherwise, it is @code{-1}
1086 and @code{errno} is set accordingly. The @code{errno} values specific
1095 The value of @var{class} is not valid.
1098 If the return value is @code{-1}, it could indicate failure, or it could
1099 be the nice value. The only way to make certain is to set @code{errno =
1112 The return value is @code{0} on success, and @code{-1} on
1122 The value of @var{class} is not valid.
1125 The call would set the nice value of a process which is owned by a different
1144 One particular process. The argument @var{id} is a process ID (pid).
1148 All the processes in a particular process group. The argument @var{id} is
1154 indicates the user). The argument @var{id} is a user ID (uid).
1157 If the argument @var{id} is 0, it stands for the calling process, its
1168 The return value is the new nice value on success, and @code{-1} on
1173 Here is an equivalent definition of @code{nice}:
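(The following is a reconstruction in terms of @code{getpriority} and @code{setpriority}, along the lines the surrounding text suggests; the manual's own definition may differ in detail.)

@smallexample
#include <errno.h>
#include <sys/resource.h>

int
nice (int increment)
@{
  int old;

  /* getpriority can legitimately return -1, so clear errno first to
     distinguish that value from an error indication.  */
  errno = 0;
  old = getpriority (PRIO_PROCESS, 0);
  if (old == -1 && errno != 0)
    return -1;

  if (setpriority (PRIO_PROCESS, 0, old + increment) == -1)
    return -1;
  return old + increment;
@}
@end smallexample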
1198 executes which process or thread is not covered.
1205 One thread or process is responsible for absolutely critical work
1209 other process or thread is allowed to use.
1213 from different CPUs. This is the case in NUMA (Non-Uniform Memory
1215 but this requirement is usually not visible to the scheduler.
1222 instance garbage collection) is performed locally on each processor. This
1227 The POSIX standard, up to this date, is not of much help in solving this
1236 This data set is a bitset where each bit represents a CPU. How the
1237 system's CPUs are mapped to bits in the bitset is system dependent.
1242 This type is a GNU extension and is defined in @file{sched.h}.
1247 it is important to never exceed the size of the bitset. The following
1252 The value of this macro is the maximum number of CPUs which can be
1267 This macro is a GNU extension and is defined in @file{sched.h}.
1279 The @var{cpu} parameter must not have side effects since it is
1282 This macro is a GNU extension and is defined in @file{sched.h}.
1294 The @var{cpu} parameter must not have side effects since it is
1297 This macro is a GNU extension and is defined in @file{sched.h}.
1307 This macro returns a nonzero value (true) if @var{cpu} is a member
1310 The @var{cpu} parameter must not have side effects since it is
1313 This macro is a GNU extension and is defined in @file{sched.h}.
1333 and @code{errno} is set to represent the error condition.
1343 This function is a GNU extension and is declared in @file{sched.h}.
1346 Note that it is not portably possible to use this information to
1361 If the function fails it will return @code{-1} and @code{errno} is set
1372 The bitset is not valid. This can mean that the affinity set might
1376 This function is a GNU extension and is declared in @file{sched.h}.
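A sketch of pinning the calling process to CPU 0 and reading the mask back; these interfaces are GNU extensions, so @code{_GNU_SOURCE} has to be defined before including @file{sched.h}:

@smallexample
#define _GNU_SOURCE
#include <stdio.h>
#include <sched.h>

int
main (void)
@{
  cpu_set_t set;

  CPU_ZERO (&set);          /* clear the whole bitset */
  CPU_SET (0, &set);        /* allow CPU 0 only */
  if (sched_setaffinity (0, sizeof (cpu_set_t), &set) != 0)
    @{
      perror ("sched_setaffinity");
      return 1;
    @}

  CPU_ZERO (&set);
  if (sched_getaffinity (0, sizeof (cpu_set_t), &set) != 0)
    @{
      perror ("sched_getaffinity");
      return 1;
    @}
  for (int cpu = 0; cpu < CPU_SETSIZE; ++cpu)
    if (CPU_ISSET (cpu, &set))
      printf ("may run on CPU %d\n", cpu);
  return 0;
@}
@end smallexample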
1383 the calling thread or process is currently running and writes them into
1385 processor is a unique nonnegative integer identifying a CPU. The node
1386 is a unique nonnegative integer identifying a NUMA node. When either
1387 @var{cpu} or @var{node} is @code{NULL}, nothing is written to the
1390 The return value is @code{0} on success and @code{-1} on failure. The
1391 following @code{errno} error condition is defined for this function:
1398 This function is Linux-specific and is declared in @file{sched.h}.
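A sketch of querying the current CPU and NUMA node; @code{getcpu} is Linux-specific and only available in sufficiently recent versions of @theglibc{}:

@smallexample
#define _GNU_SOURCE
#include <stdio.h>
#include <sched.h>

int
main (void)
@{
  unsigned int cpu, node;
  if (getcpu (&cpu, &node) != 0)
    @{
      perror ("getcpu");
      return 1;
    @}
  printf ("running on CPU %u, NUMA node %u\n", cpu, node);
  return 0;
@}
@end smallexample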
1404 The amount of memory available in the system and the way it is organized
1406 functions like @code{mmap} it is necessary to know about the size of
1407 individual memory pages and knowing how much memory is available enables
1427 data. An extra level of indirection is introduced which translates
1428 virtual addresses into physical addresses. This is normally done by the
1433 is process isolation. The different processes running on the system
1435 the address space of another process (except when shared memory is used
1436 but then it is wanted and controlled).
1438 Another advantage of virtual memory is that the address space the
1441 where the content of currently unused memory regions is stored. The
1447 memory of all the processes is larger than the available physical memory
1449 memory content from the memory to the storage media and back. This is
1455 A final aspect of virtual memory which is important and follows from
1456 what is said in the last paragraph is the granularity of the virtual
1461 together and form a @dfn{page}. The size of each page is always a power
1462 of two bytes. The smallest page size in use today is 4096, with 8192,
1468 The page size of the virtual memory the process sees is essential to
1471 information adjusted to the page size. In the case of @code{mmap} it is
1472 necessary to provide a length argument which is a multiple of the page
1473 size. Another place where the knowledge about the page size is useful
1474 is in memory allocation. If one allocates pieces of memory in larger
1475 chunks which are then subdivided by the application code it is useful to
1477 memory requirement for the block is close to (but not larger than) a multiple
1480 this optimization it is necessary to know a bit about the memory
1492 The correct interface to query about the page size is @code{sysconf}
1494 There is a much older interface available, too.
1499 @c Obtained from the aux vec at program startup time. GNU/Linux/m68k is
1502 This value is fixed for the runtime of the process but can vary in
1505 The function is declared in @file{unistd.h}.
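A sketch querying the page size through both interfaces:

@smallexample
#include <stdio.h>
#include <unistd.h>

int
main (void)
@{
  long psize = sysconf (_SC_PAGESIZE);   /* the recommended interface */
  int legacy = getpagesize ();           /* the older BSD interface */

  printf ("page size: %ld bytes (sysconf), %d bytes (getpagesize)\n",
          psize, legacy);
  return 0;
@}
@end smallexample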
1508 Widely available on @w{System V} derived systems is a method to get
1519 This does not mean all this memory is available. This information can
1529 @code{_SC_AVPHYS_PAGES} is the amount of memory the application can use
1532 @code{_SC_PHYS_PAGES} is more or less a hard limit for the working set.
1534 memory the system is in trouble.
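A sketch combining the two parameters with the page size to estimate total and currently available physical memory:

@smallexample
#include <stdio.h>
#include <unistd.h>

int
main (void)
@{
  long psize = sysconf (_SC_PAGESIZE);
  long phys  = sysconf (_SC_PHYS_PAGES);
  long avail = sysconf (_SC_AVPHYS_PAGES);

  printf ("physical memory: %lld MiB, currently available: %lld MiB\n",
          (long long) phys * psize / (1024 * 1024),
          (long long) avail * psize / (1024 * 1024));
  return 0;
@}
@end smallexample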
1549 This function is a GNU extension.
1559 This function is a GNU extension.
1567 the task can be parallelized, the optimal way to write an application is
1605 This function is a GNU extension.
1614 This function is a GNU extension.
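Assuming the processor-count interfaces meant here are the GNU extensions @code{get_nprocs_conf} and @code{get_nprocs} from @file{sys/sysinfo.h}, a sketch of using them looks like this:

@smallexample
#include <stdio.h>
#include <sys/sysinfo.h>

int
main (void)
@{
  printf ("processors configured: %d, currently online: %d\n",
          get_nprocs_conf (), get_nprocs ());
  return 0;
@}
@end smallexample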
1620 @dfn{load average}. This is a number indicating how many processes were
1621 running. This number is an average over different periods of time
1633 three elements. The return value is the number of elements written to
1636 This function is declared in @file{stdlib.h}.
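A sketch of reading the three standard load averages:

@smallexample
#include <stdio.h>
#include <stdlib.h>

int
main (void)
@{
  double load[3];
  int n = getloadavg (load, 3);

  if (n < 0)
    @{
      fputs ("getloadavg failed\n", stderr);
      return 1;
    @}
  for (int i = 0; i < n; ++i)
    printf ("load average %d: %.2f\n", i, load[i]);
  return 0;
@}
@end smallexample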