# SPDX-License-Identifier: GPL-2.0-only
#
# Architectures that offer a FUNCTION_TRACER implementation should
#  select HAVE_FUNCTION_TRACER:
#

config USER_STACKTRACE_SUPPORT
	bool

config NOP_TRACER
	bool

config HAVE_RETHOOK
	bool

config RETHOOK
	bool
	depends on HAVE_RETHOOK
	help
	  Enable the generic return hooking feature. This is an internal
	  API, which will be used by other function-entry hooking
	  features like fprobe and kprobes.

config HAVE_FUNCTION_TRACER
	bool
	help
	  See Documentation/trace/ftrace-design.rst

config HAVE_FUNCTION_GRAPH_TRACER
	bool
	help
	  See Documentation/trace/ftrace-design.rst

config HAVE_DYNAMIC_FTRACE
	bool
	help
	  See Documentation/trace/ftrace-design.rst

config HAVE_DYNAMIC_FTRACE_WITH_REGS
	bool

config HAVE_DYNAMIC_FTRACE_WITH_DIRECT_CALLS
	bool

config HAVE_DYNAMIC_FTRACE_WITH_ARGS
	bool
	help
	  If this is set, then arguments and the stack can be found from
	  the pt_regs passed into the function callback regs parameter
	  by default, even without setting the REGS flag in the ftrace_ops.
	  This allows for use of regs_get_kernel_argument() and
	  kernel_stack_pointer().

config HAVE_FTRACE_MCOUNT_RECORD
	bool
	help
	  See Documentation/trace/ftrace-design.rst

config HAVE_SYSCALL_TRACEPOINTS
	bool
	help
	  See Documentation/trace/ftrace-design.rst

config HAVE_FENTRY
	bool
	help
	  Arch supports the gcc options -pg with -mfentry

config HAVE_NOP_MCOUNT
	bool
	help
	  Arch supports the gcc options -pg with -mrecord-mcount and -nop-mcount

config HAVE_OBJTOOL_MCOUNT
	bool
	help
	  Arch supports objtool --mcount

config HAVE_C_RECORDMCOUNT
	bool
	help
	  C version of recordmcount available?

config HAVE_BUILDTIME_MCOUNT_SORT
	bool
	help
	  An architecture selects this if it sorts the mcount_loc section
	  at build time.

config BUILDTIME_MCOUNT_SORT
	bool
	default y
	depends on HAVE_BUILDTIME_MCOUNT_SORT && DYNAMIC_FTRACE
	help
	  Sort the mcount_loc section at build time.

config TRACER_MAX_TRACE
	bool

config TRACE_CLOCK
	bool

config RING_BUFFER
	bool
	select TRACE_CLOCK
	select IRQ_WORK

config EVENT_TRACING
	select CONTEXT_SWITCH_TRACER
	select GLOB
	bool

config CONTEXT_SWITCH_TRACER
	bool

config RING_BUFFER_ALLOW_SWAP
	bool
	help
	  Allow the use of ring_buffer_swap_cpu.
	  Adds a very slight overhead to tracing when enabled.

config PREEMPTIRQ_TRACEPOINTS
	bool
	depends on TRACE_PREEMPT_TOGGLE || TRACE_IRQFLAGS
	select TRACING
	default y
	help
	  Create preempt/irq toggle tracepoints if needed, so that other parts
	  of the kernel can use them to generate or add hooks to them.

# All tracer options should select GENERIC_TRACER. For those options that are
# enabled by all tracers (context switch and event tracer) they select TRACING.
# This allows those options to appear when no other tracer is selected. But the
# options do not appear when something else selects it. We need the two options
# GENERIC_TRACER and TRACING to avoid circular dependencies to accomplish the
# hiding of the automatic options.
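
# As an illustrative sketch only (MY_TRACER is a placeholder symbol, not a
# real option), a new tracer entry would typically be wired up like this,
# following the selection pattern described above:
#
#	config MY_TRACER
#		bool "My tracer"
#		select GENERIC_TRACER
#		help
#		  Enable my tracer.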

config TRACING
	bool
	select RING_BUFFER
	select STACKTRACE if STACKTRACE_SUPPORT
	select TRACEPOINTS
	select NOP_TRACER
	select BINARY_PRINTF
	select EVENT_TRACING
	select TRACE_CLOCK
	select TASKS_RCU if PREEMPTION

config GENERIC_TRACER
	bool
	select TRACING

#
# Minimum requirements an architecture has to meet for us to
# be able to offer generic tracing facilities:
#
config TRACING_SUPPORT
	bool
	depends on TRACE_IRQFLAGS_SUPPORT
	depends on STACKTRACE_SUPPORT
	default y

menuconfig FTRACE
	bool "Tracers"
	depends on TRACING_SUPPORT
	default y if DEBUG_KERNEL
	help
	  Enable the kernel tracing infrastructure.

if FTRACE

config BOOTTIME_TRACING
	bool "Boot-time Tracing support"
	depends on TRACING
	select BOOT_CONFIG
	help
	  Enable developers to set up the ftrace subsystem via a supplemental
	  kernel cmdline at boot time, for debugging (tracing) driver
	  initialization and the boot process.

config FUNCTION_TRACER
	bool "Kernel Function Tracer"
	depends on HAVE_FUNCTION_TRACER
	select KALLSYMS
	select GENERIC_TRACER
	select CONTEXT_SWITCH_TRACER
	select GLOB
	select TASKS_RCU if PREEMPTION
	select TASKS_RUDE_RCU
	help
	  Enable the kernel to trace every kernel function. This is done
	  by using a compiler feature to insert a small, 5-byte No-Operation
	  instruction at the beginning of every kernel function. That NOP
	  sequence is then dynamically patched into a tracer call when
	  tracing is enabled by the administrator. While tracing is disabled
	  at runtime (the bootup default), the overhead of the instructions
	  is very small and not measurable even in micro-benchmarks (at least
	  on x86; other architectures may see some impact).

config FUNCTION_GRAPH_TRACER
	bool "Kernel Function Graph Tracer"
	depends on HAVE_FUNCTION_GRAPH_TRACER
	depends on FUNCTION_TRACER
	depends on !X86_32 || !CC_OPTIMIZE_FOR_SIZE
	default y
	help
	  Enable the kernel to trace a function at both its entry
	  and its return.
	  Its main purpose is to trace the duration of functions and
	  draw a call graph for each thread, with some information such as
	  the return value. This is done by saving the current return
	  address in a stack of calls on the current task structure.

config DYNAMIC_FTRACE
	bool "enable/disable function tracing dynamically"
	depends on FUNCTION_TRACER
	depends on HAVE_DYNAMIC_FTRACE
	default y
	help
	  This option will modify all the calls to function tracing
	  dynamically (will patch them out of the binary image and
	  replace them with a No-Op instruction) on boot up. During
	  compile time, a table is made of all the locations that ftrace
	  can function trace, and this table is linked into the kernel
	  image. When this is enabled, functions can be individually
	  enabled, and the functions not enabled will not affect the
	  performance of the system.

	  See the files in /sys/kernel/debug/tracing:

	    available_filter_functions
	    set_ftrace_filter
	    set_ftrace_notrace

	  This way a CONFIG_FUNCTION_TRACER kernel is slightly larger, but
	  otherwise has native performance as long as no tracing is active.
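
	  For example (an illustrative sketch only; it assumes tracefs is
	  mounted at /sys/kernel/debug/tracing and that "schedule" is
	  listed in available_filter_functions on your kernel):

	    echo schedule > /sys/kernel/debug/tracing/set_ftrace_filter
	    echo function > /sys/kernel/debug/tracing/current_tracer
	    cat /sys/kernel/debug/tracing/trace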

config DYNAMIC_FTRACE_WITH_REGS
	def_bool y
	depends on DYNAMIC_FTRACE
	depends on HAVE_DYNAMIC_FTRACE_WITH_REGS

config DYNAMIC_FTRACE_WITH_DIRECT_CALLS
	def_bool y
	depends on DYNAMIC_FTRACE_WITH_REGS
	depends on HAVE_DYNAMIC_FTRACE_WITH_DIRECT_CALLS

config DYNAMIC_FTRACE_WITH_ARGS
	def_bool y
	depends on DYNAMIC_FTRACE
	depends on HAVE_DYNAMIC_FTRACE_WITH_ARGS

config FPROBE
	bool "Kernel Function Probe (fprobe)"
	depends on FUNCTION_TRACER
	depends on DYNAMIC_FTRACE_WITH_REGS
	depends on HAVE_RETHOOK
	select RETHOOK
	default n
	help
	  This option enables the kernel function probe (fprobe) based on
	  ftrace. fprobe is similar to kprobes, but it probes only kernel
	  function entries and exits. A single fprobe can also probe
	  multiple functions.

	  If unsure, say N.

config FUNCTION_PROFILER
	bool "Kernel function profiler"
	depends on FUNCTION_TRACER
	default n
	help
	  This option enables the kernel function profiler. A file is created
	  in debugfs called function_profile_enabled which defaults to zero.
	  When a 1 is echoed into this file profiling begins, and when a
	  zero is entered, profiling stops. A "functions" file is created in
	  the trace_stat directory; this file shows the list of functions that
	  have been hit and their counters.

	  If in doubt, say N.

config STACK_TRACER
	bool "Trace max stack"
	depends on HAVE_FUNCTION_TRACER
	select FUNCTION_TRACER
	select STACKTRACE
	select KALLSYMS
	help
	  This special tracer records the maximum stack footprint of the
	  kernel and displays it in /sys/kernel/debug/tracing/stack_trace.

	  This tracer works by hooking into every function call that the
	  kernel executes, and keeping a maximum stack depth value and
	  stack-trace saved. If this is configured with DYNAMIC_FTRACE
	  then it will not have any overhead while the stack tracer
	  is disabled.

	  To enable the stack tracer on bootup, pass in 'stacktrace'
	  on the kernel command line.

	  The stack tracer can also be enabled or disabled via the
	  sysctl kernel.stack_tracer_enabled.

	  Say N if unsure.

config TRACE_PREEMPT_TOGGLE
	bool
	help
	  Enables hooks which will be called when preemption is first disabled
	  and last enabled.

config IRQSOFF_TRACER
	bool "Interrupts-off Latency Tracer"
	default n
	depends on TRACE_IRQFLAGS_SUPPORT
	select TRACE_IRQFLAGS
	select GENERIC_TRACER
	select TRACER_MAX_TRACE
	select RING_BUFFER_ALLOW_SWAP
	select TRACER_SNAPSHOT
	select TRACER_SNAPSHOT_PER_CPU_SWAP
	help
	  This option measures the time spent in irqs-off critical
	  sections, with microsecond accuracy.

	  The default measurement method is a maximum search, which is
	  disabled by default and can be runtime (re-)started
	  via:

	    echo 0 > /sys/kernel/debug/tracing/tracing_max_latency

	  (Note that kernel size and overhead increase with this option
	  enabled. This option and the preempt-off timing option can be
	  used together or separately.)
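
	  For example (illustrative usage only, assuming the usual
	  /sys/kernel/debug/tracing mount point), the tracer is selected
	  at runtime and the recorded maximum can then be read back:

	    echo irqsoff > /sys/kernel/debug/tracing/current_tracer
	    cat /sys/kernel/debug/tracing/tracing_max_latency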

config PREEMPT_TRACER
	bool "Preemption-off Latency Tracer"
	default n
	depends on PREEMPTION
	select GENERIC_TRACER
	select TRACER_MAX_TRACE
	select RING_BUFFER_ALLOW_SWAP
	select TRACER_SNAPSHOT
	select TRACER_SNAPSHOT_PER_CPU_SWAP
	select TRACE_PREEMPT_TOGGLE
	help
	  This option measures the time spent in preemption-off critical
	  sections, with microsecond accuracy.

	  The default measurement method is a maximum search, which is
	  disabled by default and can be runtime (re-)started
	  via:

	    echo 0 > /sys/kernel/debug/tracing/tracing_max_latency

	  (Note that kernel size and overhead increase with this option
	  enabled. This option and the irqs-off timing option can be
	  used together or separately.)

config SCHED_TRACER
	bool "Scheduling Latency Tracer"
	select GENERIC_TRACER
	select CONTEXT_SWITCH_TRACER
	select TRACER_MAX_TRACE
	select TRACER_SNAPSHOT
	help
	  This tracer tracks the latency of the highest priority task
	  to be scheduled in, starting from the point it has woken up.

config HWLAT_TRACER
	bool "Tracer to detect hardware latencies (like SMIs)"
	select GENERIC_TRACER
	help
	  This tracer, when enabled, will create one or more kernel threads,
	  depending on what the cpumask file is set to, with each thread
	  spinning in a loop looking for interruptions caused by
	  something other than the kernel. For example, if a
	  System Management Interrupt (SMI) takes a noticeable amount of
	  time, this tracer will detect it. This is useful for testing
	  if a system is reliable for Real Time tasks.

	  Some files are created in the tracing directory when this
	  is enabled:

	    hwlat_detector/width  - time in usecs for how long to spin for
	    hwlat_detector/window - time in usecs between the start of each
	                            iteration

	  A kernel thread is created that will spin with interrupts disabled
	  for "width" microseconds in every "window" cycle. It will then not
	  spin for "window - width" microseconds, during which the system can
	  continue to operate.

	  The output will appear in the trace and trace_pipe files.

	  When the tracer is not running, it has no effect on the system,
	  but when it is running, it can cause the system to be
	  periodically non-responsive. Do not run this tracer on a
	  production system.

	  To enable this tracer, echo "hwlat" into the current_tracer
	  file. Every time a latency is greater than tracing_thresh, it will
	  be recorded into the ring buffer.

config OSNOISE_TRACER
	bool "OS Noise tracer"
	select GENERIC_TRACER
	help
	  In the context of high-performance computing (HPC), Operating
	  System Noise (osnoise) refers to the interference experienced by an
	  application due to activities inside the operating system. In the
	  context of Linux, NMIs, IRQs, SoftIRQs, and any other system thread
	  can cause noise to the system. Moreover, hardware-related jobs can
	  also cause noise, for example, via SMIs.

	  The osnoise tracer leverages the hwlat_detector by running a similar
	  loop with preemption, SoftIRQs and IRQs enabled, thus allowing all
	  the sources of osnoise during its execution. The osnoise tracer takes
	  note of the entry and exit point of any source of interference,
	  increasing a per-cpu interference counter. It saves an interference
	  counter for each source of interference.
	  The interference counter for
	  NMI, IRQs, SoftIRQs, and threads is increased any time the tool
	  observes one of these interferences' entry events. When a noise
	  happens without any interference from the operating system level,
	  the hardware noise counter increases, pointing to a hardware-related
	  noise. In this way, osnoise can account for any source of
	  interference. At the end of the period, the osnoise tracer prints
	  the sum of all noise, the max single noise, the percentage of CPU
	  available for the thread, and the counters for the noise sources.

	  In addition to the tracer, a set of tracepoints were added to
	  facilitate the identification of the osnoise source.

	  The output will appear in the trace and trace_pipe files.

	  To enable this tracer, echo "osnoise" into the current_tracer
	  file.

config TIMERLAT_TRACER
	bool "Timerlat tracer"
	select OSNOISE_TRACER
	select GENERIC_TRACER
	help
	  The timerlat tracer aims to help preemptive kernel developers
	  find sources of wakeup latencies of real-time threads.

	  The tracer creates a per-cpu kernel thread with real-time priority.
	  The tracer thread sets a periodic timer to wake itself up, and goes
	  to sleep waiting for the timer to fire. At the wakeup, the thread
	  then computes a wakeup latency value as the difference between
	  the current time and the absolute time that the timer was set
	  to expire.

	  The tracer prints two lines at every activation. The first is the
	  timer latency observed in the hardirq context before the
	  activation of the thread. The second is the timer latency observed
	  by the thread, which is the same level that cyclictest reports. The
	  ACTIVATION ID field serves to relate the irq execution to its
	  respective thread execution.

	  The tracer is built on top of the osnoise tracer, and the osnoise:
	  events can be used to trace the source of interference from NMI,
	  IRQs and other threads. It also enables the capture of the
	  stacktrace in the IRQ context, which helps to identify the code
	  path that can cause thread delay.

config MMIOTRACE
	bool "Memory mapped IO tracing"
	depends on HAVE_MMIOTRACE_SUPPORT && PCI
	select GENERIC_TRACER
	help
	  Mmiotrace traces Memory Mapped I/O access and is meant for
	  debugging and reverse engineering. It is called from the ioremap
	  implementation and works via page faults. Tracing is disabled by
	  default and can be enabled at run-time.

	  See Documentation/trace/mmiotrace.rst.
	  If you are not helping to develop drivers, say N.

config ENABLE_DEFAULT_TRACERS
	bool "Trace process context switches and events"
	depends on !GENERIC_TRACER
	select TRACING
	help
	  This tracer hooks to various trace points in the kernel,
	  allowing the user to pick and choose which trace point they
	  want to trace. It also includes the sched_switch tracer plugin.

config FTRACE_SYSCALLS
	bool "Trace syscalls"
	depends on HAVE_SYSCALL_TRACEPOINTS
	select GENERIC_TRACER
	select KALLSYMS
	help
	  Basic tracer to catch the syscall entry and exit events.
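
	  For example (illustrative usage only, assuming the usual
	  /sys/kernel/debug/tracing mount point), the syscall trace events
	  can be enabled and read as a group:

	    echo 1 > /sys/kernel/debug/tracing/events/syscalls/enable
	    cat /sys/kernel/debug/tracing/trace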

config TRACER_SNAPSHOT
	bool "Create a snapshot trace buffer"
	select TRACER_MAX_TRACE
	help
	  Allow tracing users to take a snapshot of the current buffer using
	  the ftrace interface, e.g.:

	    echo 1 > /sys/kernel/debug/tracing/snapshot
	    cat snapshot

config TRACER_SNAPSHOT_PER_CPU_SWAP
	bool "Allow snapshot to swap per CPU"
	depends on TRACER_SNAPSHOT
	select RING_BUFFER_ALLOW_SWAP
	help
	  Allow doing a snapshot of a single CPU buffer instead of a
	  full swap (all buffers). If this is set, then the following is
	  allowed:

	    echo 1 > /sys/kernel/debug/tracing/per_cpu/cpu2/snapshot

	  After this, only the tracing buffer for CPU 2 is swapped with
	  the main tracing buffer; the other CPU buffers remain the same.

	  When this is enabled, it adds a little more overhead to the
	  trace recording, as it needs to add some checks to synchronize
	  recording with swaps. But this does not affect the performance
	  of the overall system. This is enabled by default when the preempt
	  or irq latency tracers are enabled, as those need to swap as well
	  and already add the overhead (plus a lot more).

config TRACE_BRANCH_PROFILING
	bool
	select GENERIC_TRACER

choice
	prompt "Branch Profiling"
	default BRANCH_PROFILE_NONE
	help
	  Branch profiling is a software profiler. It will add hooks
	  into the C conditionals to test which path a branch takes.

	  The likely/unlikely profiler only looks at the conditions that
	  are annotated with a likely or unlikely macro.

	  The "all branch" profiler will profile every if-statement in the
	  kernel. This profiler will also enable the likely/unlikely
	  profiler.

	  Either of the above profilers adds a bit of overhead to the system.
	  If unsure, choose "No branch profiling".

config BRANCH_PROFILE_NONE
	bool "No branch profiling"
	help
	  No branch profiling. Branch profiling adds a bit of overhead.
	  Only enable it if you want to analyse the branching behavior.
	  Otherwise keep it disabled.

config PROFILE_ANNOTATED_BRANCHES
	bool "Trace likely/unlikely profiler"
	select TRACE_BRANCH_PROFILING
	help
	  This tracer profiles all likely and unlikely macros
	  in the kernel. It will display the results in:

	  /sys/kernel/debug/tracing/trace_stat/branch_annotated

	  Note: this will add a significant overhead; only turn this
	  on if you need to profile the system's use of these macros.

config PROFILE_ALL_BRANCHES
	bool "Profile all if conditionals" if !FORTIFY_SOURCE
	select TRACE_BRANCH_PROFILING
	help
	  This tracer profiles all branch conditions. Every if ()
	  executed in the kernel is recorded, whether it was taken or not.
	  The results will be displayed in:

	  /sys/kernel/debug/tracing/trace_stat/branch_all

	  This option also enables the likely/unlikely profiler.

	  This configuration, when enabled, will impose significant overhead
	  on the system. This should only be enabled when the system
	  is to be analyzed in much detail.
endchoice

config TRACING_BRANCHES
	bool
	help
	  Selected by tracers that will trace the likely and unlikely
	  conditions. This prevents the tracers themselves from being
	  profiled. Profiling the tracing infrastructure can only happen
	  when the likelys and unlikelys are not being traced.

config BRANCH_TRACER
	bool "Trace likely/unlikely instances"
	depends on TRACE_BRANCH_PROFILING
	select TRACING_BRANCHES
	help
	  This traces the events of likely and unlikely condition
	  calls in the kernel. The difference between this and the
	  "Trace likely/unlikely profiler" is that this is not a
	  histogram of the callers, but actually places the calling
	  events into a running trace buffer to see when and where the
	  events happened, as well as their results.

	  Say N if unsure.

config BLK_DEV_IO_TRACE
	bool "Support for tracing block IO actions"
	depends on SYSFS
	depends on BLOCK
	select RELAY
	select DEBUG_FS
	select TRACEPOINTS
	select GENERIC_TRACER
	select STACKTRACE
	help
	  Say Y here if you want to be able to trace the block layer actions
	  on a given queue. Tracing allows you to see any traffic happening
	  on a block device queue. For more information (and the userspace
	  support tools needed), fetch the blktrace tools from:

	    git://git.kernel.dk/blktrace.git

	  Tracing is also possible using the ftrace interface, e.g.:

	    echo 1 > /sys/block/sda/sda1/trace/enable
	    echo blk > /sys/kernel/debug/tracing/current_tracer
	    cat /sys/kernel/debug/tracing/trace_pipe

	  If unsure, say N.

config KPROBE_EVENTS
	depends on KPROBES
	depends on HAVE_REGS_AND_STACK_ACCESS_API
	bool "Enable kprobes-based dynamic events"
	select TRACING
	select PROBE_EVENTS
	select DYNAMIC_EVENTS
	default y
	help
	  This allows the user to add tracing events (similar to tracepoints)
	  on the fly via the ftrace interface. See
	  Documentation/trace/kprobetrace.rst for more details.

	  Those events can be inserted wherever kprobes can probe, and record
	  various register and memory values.

	  This option is also required by the perf-probe subcommand of perf
	  tools. If you want to use perf tools, this option is strongly
	  recommended.

config KPROBE_EVENTS_ON_NOTRACE
	bool "Do NOT protect notrace functions from kprobe events"
	depends on KPROBE_EVENTS
	depends on DYNAMIC_FTRACE
	default n
	help
	  This is only for developers who want to debug ftrace itself
	  using kprobe events.

	  If kprobes can use ftrace instead of a breakpoint, ftrace-related
	  functions are protected from kprobe events to prevent an infinite
	  recursion or any unexpected execution path which leads to a kernel
	  crash.

	  This option disables such protection and allows you to put kprobe
	  events on ftrace functions for debugging ftrace by itself.
	  Note that this might let you shoot yourself in the foot.

	  If unsure, say N.

config UPROBE_EVENTS
	bool "Enable uprobes-based dynamic events"
	depends on ARCH_SUPPORTS_UPROBES
	depends on MMU
	depends on PERF_EVENTS
	select UPROBES
	select PROBE_EVENTS
	select DYNAMIC_EVENTS
	select TRACING
	default y
	help
	  This allows the user to add tracing events on top of userspace
	  dynamic events (similar to tracepoints) on the fly via the trace
	  events interface. Those events can be inserted wherever uprobes
	  can probe, and record various registers.
	  This option is required if you plan to use the perf-probe
	  subcommand of perf tools on user-space applications.
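
	  For example (an illustrative sketch only; the binary path and the
	  0x4245c0 offset are placeholders, and in practice the offset is
	  derived from the target binary's symbol table as described in
	  Documentation/trace/uprobetracer.rst):

	    echo 'p:my_event /bin/bash:0x4245c0' > /sys/kernel/debug/tracing/uprobe_events
	    echo 1 > /sys/kernel/debug/tracing/events/uprobes/my_event/enable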

config BPF_EVENTS
	depends on BPF_SYSCALL
	depends on (KPROBE_EVENTS || UPROBE_EVENTS) && PERF_EVENTS
	bool
	default y
	help
	  This allows the user to attach BPF programs to kprobe, uprobe, and
	  tracepoint events.

config DYNAMIC_EVENTS
	def_bool n

config PROBE_EVENTS
	def_bool n

config BPF_KPROBE_OVERRIDE
	bool "Enable BPF programs to override a kprobed function"
	depends on BPF_EVENTS
	depends on FUNCTION_ERROR_INJECTION
	default n
	help
	  Allows BPF to override the execution of a probed function and
	  set a different return value. This is used for error injection.

config FTRACE_MCOUNT_RECORD
	def_bool y
	depends on DYNAMIC_FTRACE
	depends on HAVE_FTRACE_MCOUNT_RECORD

config FTRACE_MCOUNT_USE_PATCHABLE_FUNCTION_ENTRY
	bool
	depends on FTRACE_MCOUNT_RECORD

config FTRACE_MCOUNT_USE_CC
	def_bool y
	depends on $(cc-option,-mrecord-mcount)
	depends on !FTRACE_MCOUNT_USE_PATCHABLE_FUNCTION_ENTRY
	depends on FTRACE_MCOUNT_RECORD

config FTRACE_MCOUNT_USE_OBJTOOL
	def_bool y
	depends on HAVE_OBJTOOL_MCOUNT
	depends on !FTRACE_MCOUNT_USE_PATCHABLE_FUNCTION_ENTRY
	depends on !FTRACE_MCOUNT_USE_CC
	depends on FTRACE_MCOUNT_RECORD
	select OBJTOOL

config FTRACE_MCOUNT_USE_RECORDMCOUNT
	def_bool y
	depends on !FTRACE_MCOUNT_USE_PATCHABLE_FUNCTION_ENTRY
	depends on !FTRACE_MCOUNT_USE_CC
	depends on !FTRACE_MCOUNT_USE_OBJTOOL
	depends on FTRACE_MCOUNT_RECORD

config TRACING_MAP
	bool
	depends on ARCH_HAVE_NMI_SAFE_CMPXCHG
	help
	  tracing_map is a special-purpose lock-free map for tracing,
	  separated out as a stand-alone facility in order to allow it
	  to be shared between multiple tracers. It isn't meant to be
	  generally used outside of that context, and is normally
	  selected by tracers that use it.

config SYNTH_EVENTS
	bool "Synthetic trace events"
	select TRACING
	select DYNAMIC_EVENTS
	default n
	help
	  Synthetic events are user-defined trace events that can be
	  used to combine data from other trace events or in fact any
	  data source. Synthetic events can be generated indirectly
	  via the trace() action of histogram triggers or directly
	  by way of an in-kernel API.

	  See Documentation/trace/events.rst or
	  Documentation/trace/histogram.rst for details and examples.

	  If in doubt, say N.

config USER_EVENTS
	bool "User trace events"
	select TRACING
	select DYNAMIC_EVENTS
	depends on BROKEN || COMPILE_TEST # API needs to be straightened out
	help
	  User trace events are user-defined trace events that
	  can be used like an existing kernel trace event. User trace
	  events are generated by writing to a tracefs file. User
	  processes can determine if their tracing events should be
	  generated by memory mapping a tracefs file and checking for
	  an associated byte being non-zero.

	  If in doubt, say N.

config HIST_TRIGGERS
	bool "Histogram triggers"
	depends on ARCH_HAVE_NMI_SAFE_CMPXCHG
	select TRACING_MAP
	select TRACING
	select DYNAMIC_EVENTS
	select SYNTH_EVENTS
	default n
	help
	  Hist triggers allow one or more arbitrary trace event fields
	  to be aggregated into hash tables and dumped out by
	  reading a debugfs/tracefs file.
	  They're useful for
	  gathering quick and dirty (though precise) summaries of
	  event activity as an initial guide for further investigation
	  using more advanced tools.

	  Inter-event tracing of quantities such as latencies is also
	  supported using hist triggers under this option.

	  See Documentation/trace/histogram.rst.
	  If in doubt, say N.

config TRACE_EVENT_INJECT
	bool "Trace event injection"
	depends on TRACING
	help
	  Allow user-space to inject a specific trace event into the ring
	  buffer. This is mainly used for testing purposes.

	  If unsure, say N.

config TRACEPOINT_BENCHMARK
	bool "Add tracepoint that benchmarks tracepoints"
	help
	  This option creates the tracepoint "benchmark:benchmark_event".
	  When the tracepoint is enabled, it kicks off a kernel thread that
	  goes into an infinite loop (calling cond_resched() to let other tasks
	  run), and calls the tracepoint. Each iteration will record the time
	  it took to write to the tracepoint, and on the next iteration that
	  data will be passed to the tracepoint itself. That is, the tracepoint
	  will report the time it took to do the previous tracepoint.
	  The string written to the tracepoint is a static string of 128 bytes
	  to keep the time the same. The initial string is simply a write of
	  "START". The second string records the cold cache time of the first
	  write, which is not added to the rest of the calculations.

	  As it is a tight loop, it benchmarks as hot cache. That's fine because
	  we care most about hot paths that are probably in cache already.

	  An example of the output:

	    START
	    first=3672 [COLD CACHED]
	    last=632 first=3672 max=632 min=632 avg=316 std=446 std^2=199712
	    last=278 first=3672 max=632 min=278 avg=303 std=316 std^2=100337
	    last=277 first=3672 max=632 min=277 avg=296 std=258 std^2=67064
	    last=273 first=3672 max=632 min=273 avg=292 std=224 std^2=50411
	    last=273 first=3672 max=632 min=273 avg=288 std=200 std^2=40389
	    last=281 first=3672 max=632 min=273 avg=287 std=183 std^2=33666

config RING_BUFFER_BENCHMARK
	tristate "Ring buffer benchmark stress tester"
	depends on RING_BUFFER
	help
	  This option creates a test to stress the ring buffer and benchmark it.
	  It creates its own ring buffer such that it will not interfere with
	  any other users of the ring buffer (such as ftrace). It then creates
	  a producer and consumer that will run for 10 seconds and sleep for
	  10 seconds. Each interval it will print out the number of events
	  it recorded and give a rough estimate of how long each iteration took.

	  It does not disable interrupts or raise its priority, so it may be
	  affected by processes that are running.

	  If unsure, say N.

config TRACE_EVAL_MAP_FILE
	bool "Show eval mappings for trace events"
	depends on TRACING
	help
	  The "print fmt" of the trace events will show the enum/sizeof names
	  instead of their values. This can cause problems for user space tools
	  that use this string to parse the raw data, as user space does not
	  know how to convert the string to its value.

	  To fix this, there's a special macro in the kernel that can be used
	  to convert an enum/sizeof into its value. If this macro is used, then
	  the print fmt strings will be converted to their values.

	  If something does not get converted properly, this option can be
	  used to show what enums/sizeof the kernel tried to convert.

	  This option is for debugging the conversions. A file is created
	  in the tracing directory called "eval_map" that will show the
	  names matched with their values and what trace event system they
	  belong to.

	  Normally, the mapping of the strings to values will be freed after
	  boot up or module load. With this option, they will not be freed, as
	  they are needed for the "eval_map" file. Enabling this option will
	  increase the memory footprint of the running kernel.

	  If unsure, say N.

config FTRACE_RECORD_RECURSION
	bool "Record functions that recurse in function tracing"
	depends on FUNCTION_TRACER
	help
	  All callbacks that attach to function tracing have some sort
	  of protection against recursion. Even though the protection exists,
	  it adds overhead. This option will create a file in the tracefs
	  file system called "recursed_functions" that will list the functions
	  that triggered a recursion.

	  This will add more overhead to cases that have recursion.

	  If unsure, say N.

config FTRACE_RECORD_RECURSION_SIZE
	int "Max number of recursed functions to record"
	default 128
	depends on FTRACE_RECORD_RECURSION
	help
	  This defines the limit on the number of functions that can be
	  listed in the "recursed_functions" file, which lists all
	  the functions that caused a recursion to happen.
	  This file can be reset, but the limit cannot be changed at
	  runtime.

config RING_BUFFER_RECORD_RECURSION
	bool "Record functions that recurse in the ring buffer"
	depends on FTRACE_RECORD_RECURSION
	# default y, because it is coupled with FTRACE_RECORD_RECURSION
	default y
	help
	  The ring buffer has its own internal recursion. Although when
	  recursion happens it won't cause harm because of the protection,
	  it does cause unwanted overhead. Enabling this option will record
	  the place where recursion was detected in the ftrace
	  "recursed_functions" file.

	  This will add more overhead to cases that have recursion.

config GCOV_PROFILE_FTRACE
	bool "Enable GCOV profiling on ftrace subsystem"
	depends on GCOV_KERNEL
	help
	  Enable GCOV profiling on the ftrace subsystem for checking
	  which functions/lines are tested.

	  If unsure, say N.

	  Note that on a kernel compiled with this config, ftrace will
	  run significantly slower.

config FTRACE_SELFTEST
	bool

config FTRACE_STARTUP_TEST
	bool "Perform a startup test on ftrace"
	depends on GENERIC_TRACER
	select FTRACE_SELFTEST
	help
	  This option performs a series of startup tests on ftrace. On bootup
	  a series of tests are made to verify that the tracer is
	  functioning properly. It will do tests on all the configured
	  tracers of ftrace.

config EVENT_TRACE_STARTUP_TEST
	bool "Run selftest on trace events"
	depends on FTRACE_STARTUP_TEST
	default y
	help
	  This option performs a test on all trace events in the system.
	  It basically just enables each event and runs some code that
	  will trigger events (not necessarily the event it enables).
	  This may take some time to run, as there are a lot of events.

config EVENT_TRACE_TEST_SYSCALLS
	bool "Run selftest on syscall events"
	depends on EVENT_TRACE_STARTUP_TEST
	help
	  This option will also enable testing every syscall event.
	  It only enables each event, runs various loads with the event
	  enabled, and then disables it.
	  This adds a bit more time to kernel boot-up,
	  since the test is run on every system call defined.

	  TBD - enable a way to actually call the syscalls as we test their
	  events

config FTRACE_SORT_STARTUP_TEST
	bool "Verify compile time sorting of ftrace functions"
	depends on DYNAMIC_FTRACE
	depends on BUILDTIME_MCOUNT_SORT
	help
	  Sorting of the mcount_loc section, which ftrace uses to know
	  where to patch functions for tracing and other callbacks, is
	  done at compile time. But if the sort is not done correctly,
	  it will cause non-deterministic failures. When this is set,
	  the sorted sections will be verified to be indeed sorted, and
	  a warning will be issued if they are not.

	  If unsure, say N.

config RING_BUFFER_STARTUP_TEST
	bool "Ring buffer startup self test"
	depends on RING_BUFFER
	help
	  Run a simple self test on the ring buffer on boot up. Late in the
	  kernel boot sequence, the test will start, kicking off
	  a thread per CPU. Each thread will write events of various sizes
	  into the ring buffer. Another thread is created to send IPIs
	  to each of the threads, where the IPI handler will also write
	  to the ring buffer, to test/stress the nesting ability.
	  If any anomalies are discovered, a warning will be displayed
	  and all ring buffers will be disabled.

	  The test runs for 10 seconds. This will slow your boot time
	  by at least 10 more seconds.

	  At the end of the test, statistics and more checks are done.
	  It will output the stats of each per-CPU buffer: what
	  was written, the sizes, what was read, what was lost, and
	  other similar details.

	  If unsure, say N.

config RING_BUFFER_VALIDATE_TIME_DELTAS
	bool "Verify ring buffer time stamp deltas"
	depends on RING_BUFFER
	help
	  This will audit the time stamps on the ring buffer sub
	  buffer to make sure that all the time deltas for the
	  events on a sub buffer match the current time stamp.
	  This audit is performed for every event that is not
	  interrupted, or interrupting another event. A check
	  is also made when traversing sub buffers to make sure
	  that all the deltas on the previous sub buffer do not
	  add up to be greater than the current time stamp.

	  NOTE: This adds significant overhead to the recording of events,
	  and should only be used to test the logic of the ring buffer.
	  Do not use it on production systems.

	  Only say Y if you understand what this does, and you
	  still want it enabled. Otherwise say N.

config MMIOTRACE_TEST
	tristate "Test module for mmiotrace"
	depends on MMIOTRACE && m
	help
	  This is a dumb module for testing mmiotrace. It is very dangerous
	  as it will write garbage to IO memory starting at a given address.
	  However, it should be safe to use on e.g. an unused portion of VRAM.

	  Say N, unless you absolutely know what you are doing.

config PREEMPTIRQ_DELAY_TEST
	tristate "Test module to create a preempt / IRQ disable delay thread to test latency tracers"
	depends on m
	help
	  Select this option to build a test module that can help test latency
	  tracers by executing a preempt or irq disable section with a user
	  configurable delay. The module busy waits for the duration of the
	  critical section.

	  For example, the following invocation generates a burst of three
	  irq-disabled critical sections for 500us:

	    modprobe preemptirq_delay_test test_mode=irq delay=500 burst_size=3

	  In addition, if you want to run the test on the CPU that the latency
	  tracer is running on, specify cpu_affinity=cpu_num at the end of the
	  command.

	  If unsure, say N.

config SYNTH_EVENT_GEN_TEST
	tristate "Test module for in-kernel synthetic event generation"
	depends on SYNTH_EVENTS
	help
	  This option creates a test module to check the base
	  functionality of in-kernel synthetic event definition and
	  generation.

	  To test, insert the module, and then check the trace buffer
	  for the generated sample events.

	  If unsure, say N.

config KPROBE_EVENT_GEN_TEST
	tristate "Test module for in-kernel kprobe event generation"
	depends on KPROBE_EVENTS
	help
	  This option creates a test module to check the base
	  functionality of in-kernel kprobe event definition.

	  To test, insert the module, and then check the trace buffer
	  for the generated kprobe events.

	  If unsure, say N.

config HIST_TRIGGERS_DEBUG
	bool "Hist trigger debug support"
	depends on HIST_TRIGGERS
	help
	  Add a "hist_debug" file for each event, which when read will
	  dump out a bunch of internal details about the hist triggers
	  defined on that event.

	  The hist_debug file serves a couple of purposes:

	    - Helps developers verify that nothing is broken.

	    - Provides educational information to support the details
	      of the hist trigger internals as described by
	      Documentation/trace/histogram-design.rst.

	  The hist_debug output only covers the data structures
	  related to the histogram definitions themselves and doesn't
	  display the internals of map buckets or variable values of
	  running histograms.

	  If unsure, say N.

endif # FTRACE