
Searched refs:guest (Results 1 – 25 of 265) sorted by relevance


/linux-6.6.21/tools/virtio/ringtest/
virtio_ring_0_9.c
41 struct guest { struct
52 } guest; argument
78 guest.avail_idx = 0; in alloc_ring()
79 guest.kicked_avail_idx = -1; in alloc_ring()
80 guest.last_used_idx = 0; in alloc_ring()
83 guest.free_head = 0; in alloc_ring()
89 guest.num_free = ring_size; in alloc_ring()
107 if (!guest.num_free) in add_inbuf()
111 head = (ring_size - 1) & (guest.avail_idx++); in add_inbuf()
113 head = guest.free_head; in add_inbuf()
[all …]
ring.c
59 struct guest { struct
65 } guest; variable
92 guest.avail_idx = 0; in alloc_ring()
93 guest.kicked_avail_idx = -1; in alloc_ring()
94 guest.last_used_idx = 0; in alloc_ring()
103 guest.num_free = ring_size; in alloc_ring()
116 if (!guest.num_free) in add_inbuf()
119 guest.num_free--; in add_inbuf()
120 head = (ring_size - 1) & (guest.avail_idx++); in add_inbuf()
145 unsigned head = (ring_size - 1) & guest.last_used_idx; in get_buf()
[all …]
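
Both ring testbeds above keep the same guest-side bookkeeping. A minimal sketch reconstructed from the matched lines (the in-tree structs in tools/virtio/ringtest/ carry more state than shown here):

    struct guest {
        unsigned avail_idx;        /* free-running index of the next buffer to publish */
        unsigned kicked_avail_idx; /* avail_idx at the last kick of the host */
        unsigned last_used_idx;    /* next used entry to consume */
        unsigned free_head;        /* head of the free descriptor list */
        unsigned num_free;         /* descriptors still available */
    } guest;

    /* ring_size is a power of two, so masking a free-running counter with
     * (ring_size - 1) wraps it to a slot index, as in the matches:
     *
     *     head = (ring_size - 1) & (guest.avail_idx++);
     */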
/linux-6.6.21/drivers/misc/cxl/
of.c
88 afu->guest->handle = addr; in read_phys_addr()
91 afu->guest->p2n_phys += addr; in read_phys_addr()
92 afu->guest->p2n_size = size; in read_phys_addr()
133 if (read_handle(afu_np, &afu->guest->handle)) in cxl_of_read_afu_handle()
135 pr_devel("AFU handle: 0x%.16llx\n", afu->guest->handle); in cxl_of_read_afu_handle()
190 read_prop_dword(np, "ibm,max-ints-per-process", &afu->guest->max_ints); in cxl_of_read_afu_properties()
191 afu->irqs_max = afu->guest->max_ints; in cxl_of_read_afu_properties()
269 pr_devel("AFU handle: %#llx\n", afu->guest->handle); in cxl_of_read_afu_properties()
271 afu->guest->p2n_phys, afu->guest->p2n_size); in cxl_of_read_afu_properties()
301 adapter->guest->irq_avail = kcalloc(nranges, sizeof(struct irq_avail), in read_adapter_irq_config()
[all …]
guest.c
117 rc = cxl_h_collect_vpd_adapter(adapter->guest->handle, in guest_collect_vpd()
120 rc = cxl_h_collect_vpd(afu->guest->handle, 0, in guest_collect_vpd()
158 return cxl_h_collect_int_info(ctx->afu->guest->handle, ctx->process_token, info); in guest_get_irq_info()
186 rc = cxl_h_read_error_state(afu->guest->handle, &state); in afu_read_error_state()
203 rc = cxl_h_get_fn_error_interrupt(afu->guest->handle, &serr); in guest_slice_irq_err()
214 rc = cxl_h_ack_fn_error_interrupt(afu->guest->handle, serr); in guest_slice_irq_err()
228 for (i = 0; i < adapter->guest->irq_nranges; i++) { in irq_alloc_range()
229 cur = &adapter->guest->irq_avail[i]; in irq_alloc_range()
252 for (i = 0; i < adapter->guest->irq_nranges; i++) { in irq_free_range()
253 cur = &adapter->guest->irq_avail[i]; in irq_free_range()
[all …]
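
The irq_alloc_range()/irq_free_range() hits walk an array of interrupt ranges made available to the guest. A standalone sketch of that walk, with an assumed, simplified layout (the driver's real irq_avail tracks allocation with a bitmap per range):

    struct irq_avail {
        unsigned offset;        /* first hwirq number in this range */
        unsigned range;         /* number of hwirqs in the range */
        unsigned char *used;    /* one byte per hwirq; 0 = free (simplified) */
    };

    struct guest_irqs {
        int nranges;
        struct irq_avail *irq_avail;
    };

    /* Take the first free hwirq across all ranges, mirroring the loop
     * shape in the guest.c matches above. */
    static long irq_alloc_one(struct guest_irqs *g)
    {
        for (int i = 0; i < g->nranges; i++) {
            struct irq_avail *cur = &g->irq_avail[i];
            for (unsigned n = 0; n < cur->range; n++) {
                if (!cur->used[n]) {
                    cur->used[n] = 1;
                    return (long)cur->offset + n;
                }
            }
        }
        return -1;      /* no free interrupt */
    }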
/linux-6.6.21/arch/mips/include/asm/
cpu-features.h
666 #define cpu_guest_has_conf1 (cpu_data[0].guest.conf & (1 << 1))
669 #define cpu_guest_has_conf2 (cpu_data[0].guest.conf & (1 << 2))
672 #define cpu_guest_has_conf3 (cpu_data[0].guest.conf & (1 << 3))
675 #define cpu_guest_has_conf4 (cpu_data[0].guest.conf & (1 << 4))
678 #define cpu_guest_has_conf5 (cpu_data[0].guest.conf & (1 << 5))
681 #define cpu_guest_has_conf6 (cpu_data[0].guest.conf & (1 << 6))
684 #define cpu_guest_has_conf7 (cpu_data[0].guest.conf & (1 << 7))
687 #define cpu_guest_has_fpu (cpu_data[0].guest.options & MIPS_CPU_FPU)
690 #define cpu_guest_has_watch (cpu_data[0].guest.options & MIPS_CPU_WATCH)
693 #define cpu_guest_has_contextconfig (cpu_data[0].guest.options & MIPS_CPU_CTXTC)
[all …]
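
Each cpu_guest_has_* macro above is a single bit test against guest capability words probed at boot. A standalone model of the pattern (the bit value below is illustrative, not the kernel's):

    #define MIPS_CPU_FPU    (1u << 0)       /* illustrative bit value */

    struct guest_caps {
        unsigned conf;          /* which guest Config registers exist */
        unsigned options;       /* MIPS_CPU_* capability bits */
    };

    #define guest_has_conf(g, n)    ((g).conf & (1u << (n)))
    #define guest_has_fpu(g)        ((g).options & MIPS_CPU_FPU)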
/linux-6.6.21/Documentation/virt/kvm/x86/
running-nested-guests.rst
7 Nested virtualization is the ability to run a guest inside another guest (it
9 example is a KVM guest that in turn runs on a KVM guest (the rest of
33 - L1 – level-1 guest; a VM running on L0; also called the "guest
36 - L2 – level-2 guest; a VM running on L1, this is the "nested guest"
46 (guest hypervisor), L3 (nested guest).
61 Provider, using nested KVM lets you rent a large enough "guest
62 hypervisor" (level-1 guest). This in turn allows you to create
66 - Live migration of "guest hypervisors" and their nested guests, for
139 .. note:: If you suspect your L2 (i.e. nested guest) is running slower,
144 Starting a nested guest (x86)
[all …]
mmu.rst
8 for presenting a standard x86 mmu to the guest, while translating guest
14 the guest should not be able to determine that it is running
19 the guest must not be able to touch host memory not assigned
28 Linux memory management code must be in control of guest memory
32 report writes to guest memory to enable live migration
47 gfn guest frame number
48 gpa guest physical address
49 gva guest virtual address
50 ngpa nested guest physical address
51 ngva nested guest virtual address
[all …]
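
The glossary terms relate by simple page arithmetic, and KVM's own helpers have the same shape. A minimal sketch, assuming 4 KiB pages:

    #include <stdint.h>

    #define PAGE_SHIFT 12                   /* 4 KiB pages assumed */

    typedef uint64_t gfn_t;                 /* guest frame number */
    typedef uint64_t gpa_t;                 /* guest physical address */

    /* A gfn is a gpa with the page offset stripped, and vice versa. */
    static inline gpa_t gfn_to_gpa(gfn_t gfn) { return gfn << PAGE_SHIFT; }
    static inline gfn_t gpa_to_gfn(gpa_t gpa) { return gpa >> PAGE_SHIFT; }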
amd-memory-encryption.rst
52 The SEV guest key management is handled by a separate processor called the AMD
55 encrypting bootstrap code, snapshot, migrating and debugging the guest. For more
101 context. To create the encryption context, user must provide a guest policy,
112 __u32 policy; /* guest's policy */
114 … __u64 dh_uaddr; /* userspace address pointing to the guest owner's PDH key */
117 … __u64 session_addr; /* userspace address which points to the guest session information */
132 of the memory contents that can be sent to the guest owner as an attestation
152 data encrypted by the KVM_SEV_LAUNCH_UPDATE_DATA command. The guest owner may
153 wait to provide the guest with confidential information until it can verify the
154 measurement. Since the guest owner knows the initial contents of the guest at
[all …]
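
The struct fragments above belong to the KVM_SEV_LAUNCH_START parameter block. Reassembled below from the doc's field list; note that in the uapi header the struct is kvm_sev_launch_start and the session pointer is spelled session_uaddr:

    #include <linux/types.h>

    struct kvm_sev_launch_start {
        __u32 handle;           /* zero asks firmware to create a new handle */
        __u32 policy;           /* guest's policy */
        __u64 dh_uaddr;         /* userspace address of the guest owner's PDH key */
        __u32 dh_len;
        __u64 session_uaddr;    /* userspace address of the guest session information */
        __u32 session_len;
    };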
cpuid.rst
9 A guest running on a kvm host can check some of its features using
12 a guest.
65 KVM_FEATURE_PV_UNHALT 7 guest checks this feature bit
69 KVM_FEATURE_PV_TLB_FLUSH 9 guest checks this feature bit
77 KVM_FEATURE_PV_SEND_IPI 11 guest checks this feature bit
85 KVM_FEATURE_PV_SCHED_YIELD 13 guest checks this feature bit
89 KVM_FEATURE_ASYNC_PF_INT 14 guest checks this feature bit
95 KVM_FEATURE_MSI_EXT_DEST_ID 15 guest checks this feature bit
99 KVM_FEATURE_HC_MAP_GPA_RANGE 16 guest checks this feature bit before
103 KVM_FEATURE_MIGRATION_CONTROL 17 guest checks this feature bit before
[all …]
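
Each "guest checks this feature bit" entry refers to a bit of the KVM features CPUID leaf. A minimal sketch, assuming the KVM leaves sit at the default 0x40000000 base; a robust guest first confirms the "KVMKVMKVM" signature, as below:

    #include <cpuid.h>
    #include <string.h>

    #define KVM_CPUID_SIGNATURE     0x40000000
    #define KVM_CPUID_FEATURES      0x40000001
    #define KVM_FEATURE_PV_UNHALT   7       /* bit number from the table above */

    static int kvm_feature(unsigned bit)
    {
        unsigned eax, ebx, ecx, edx;
        char sig[12];

        __cpuid(KVM_CPUID_SIGNATURE, eax, ebx, ecx, edx);
        memcpy(sig + 0, &ebx, 4);
        memcpy(sig + 4, &ecx, 4);
        memcpy(sig + 8, &edx, 4);
        if (memcmp(sig, "KVMKVMKVM\0\0\0", 12))
            return 0;                       /* not running on KVM */

        __cpuid(KVM_CPUID_FEATURES, eax, ebx, ecx, edx);
        return (eax >> bit) & 1;
    }

Usage would be e.g. if (kvm_feature(KVM_FEATURE_PV_UNHALT)) before enabling the PV unhalt path.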
hypercalls.rst
54 :Purpose: Trigger guest exit so that the host can check for pending
70 :Purpose: Expose hypercall availability to the guest. On x86 platforms, cpuid
81 :Purpose: To enable communication between the hypervisor and guest there is a
83 The guest can map this shared page to access its supervisor register
93 A vcpu of a paravirtualized guest that is busywaiting in guest
98 same guest can wakeup the sleeping vcpu by issuing KVM_HC_KICK_CPU hypercall,
107 :Purpose: Hypercall used to synchronize host and guest clocks.
111 a0: guest physical address where host copies
130 * tsc: guest TSC value used to calculate sec/nsec pair
133 The hypercall lets a guest compute a precise timestamp across
[all …]
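
On x86 these hypercalls are issued from guest kernel context with the number in rax, arguments in rbx/rcx/rdx/rsi, and the return value in rax (vmcall on Intel, vmmcall on AMD). A minimal sketch of the two-argument form; per the doc, KVM_HC_KICK_CPU takes a flags word and the APIC ID of the vcpu to wake:

    #define KVM_HC_KICK_CPU 5       /* hypercall number from the uapi header */

    static long kvm_hypercall2(unsigned long nr, unsigned long a0,
                               unsigned long a1)
    {
        long ret;
        asm volatile("vmcall"           /* vmmcall on AMD */
                     : "=a"(ret)
                     : "a"(nr), "b"(a0), "c"(a1)
                     : "memory");
        return ret;
    }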
msr.rst
25 in guest RAM. This memory is expected to hold a copy of the following
40 guest has to check version before and after grabbing
64 guest RAM, plus an enable bit in bit 0. This memory is expected to hold
87 guest has to check version before and after grabbing
127 coordinated between the guest and the hypervisor. Availability
139 | | | guest vcpu has been paused by |
196 which must be in guest RAM and must be zeroed. This memory is expected
221 a token that will be used to notify the guest when missing page becomes
225 is currently supported, when set, it indicates that the guest is dealing
232 as regular page fault, guest must reset 'flags' to '0' before it does
[all …]
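
The "check version before and after grabbing" rule the hits describe is a seqcount: an odd version means the hypervisor is mid-update, and a changed version means the copy raced and must be retried. A sketch with an illustrative layout (the real struct behind the clock MSRs is pvclock_vcpu_time_info):

    #include <stdint.h>

    struct shared_info {                    /* illustrative layout */
        volatile uint32_t version;
        volatile uint64_t system_time;
        /* ... further fields shared with the hypervisor ... */
    };

    static uint64_t read_stable(struct shared_info *s)
    {
        uint32_t v;
        uint64_t t;

        do {
            v = s->version;
            __sync_synchronize();   /* version read before the body */
            t = s->system_time;
            __sync_synchronize();   /* body read before the re-check */
        } while ((v & 1) || v != s->version);

        return t;
    }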
/linux-6.6.21/Documentation/arch/x86/
tdx.rst
7 Intel's Trust Domain Extensions (TDX) protect confidential guest VMs from
8 the host and physical attacks by isolating the guest register state and by
9 encrypting the guest memory. In TDX, a special module running in a special
10 mode sits between the host and the guest and manages the guest/host
13 Since the host cannot directly access guest registers or memory, much
14 normal functionality of a hypervisor must be moved into the guest. This is
16 guest kernel. A #VE is handled entirely inside the guest kernel, but some
20 guest to the hypervisor or the TDX module.
64 indicates a bug in the guest. The guest may try to handle the #GP with a
70 The "just works" MSRs do not need any special guest handling. They might
[all …]
/linux-6.6.21/tools/perf/Documentation/
guest-files.txt
4 Guest OS /proc/kallsyms file copy. perf reads it to get guest
5 kernel symbols. Users copy it out from guest OS.
8 Guest OS /proc/modules file copy. perf reads it to get guest
9 kernel module information. Users copy it out from guest OS.
14 --guest-code::
15 Indicate that guest code can be found in the hypervisor process,
perf-kvm.txt
6 perf-kvm - Tool to trace/measure kvm guest os
11 'perf kvm' [--host] [--guest] [--guestmount=<path>
14 'perf kvm' [--host] [--guest] [--guestkallsyms=<path> --guestmodules=<path>
23 a performance counter profile of guest os in realtime
28 default behavior of perf kvm as --guest, so if neither --host nor --guest
29 is input, the perf data file name is perf.data.guest. If --host is input,
31 perf.data.host, please input --host --no-guest. The behaviors are shown as
33 Default('') -> perf.data.guest
35 --guest -> perf.data.guest
36 --host --guest -> perf.data.kvm
[all …]
/linux-6.6.21/Documentation/ABI/testing/
sysfs-hypervisor-xen
6 Type of guest:
7 "Xen": standard guest type on arm
8 "HVM": fully virtualized guest (x86)
9 "PV": paravirtualized guest (x86)
10 "PVH": fully virtualized guest without legacy emulation (x86)
22 "self" The guest can profile itself
23 "hv" The guest can profile itself and, if it is
25 "all" The guest can profile itself, the hypervisor
/linux-6.6.21/Documentation/virt/hyperv/
vmbus.rst
5 VMbus is a software construct provided by Hyper-V to guest VMs. It
7 devices that Hyper-V presents to guest VMs. The control path is
8 used to offer synthetic devices to the guest VM and, in some cases,
10 channels for communicating between the device driver in the guest VM
12 signaling primitives to allow Hyper-V and the guest to interrupt
16 entry in a running Linux guest. The VMbus driver (drivers/hv/vmbus_drv.c)
47 the device in the guest VM. For example, the Linux driver for the
65 guest, and the "out" ring buffer is for messages from the guest to
67 viewed by the guest side. The ring buffers are memory that is
68 shared between the guest and the host, and they follow the standard
[all …]
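
A hedged model of the channel layout those lines describe: each channel has an "in" ring (host to guest) and an "out" ring (guest to host), each fronted by read/write indices in memory shared by both sides. Field names here are illustrative; the in-tree layout is struct hv_ring_buffer:

    #include <stdint.h>

    struct ring {                           /* illustrative, one per direction */
        volatile uint32_t write_index;  /* byte offset the producer writes next */
        volatile uint32_t read_index;   /* byte offset the consumer reads next */
        uint8_t data[];                 /* payload area shared host/guest */
    };

    struct channel {
        struct ring *in;        /* host writes, guest reads */
        struct ring *out;       /* guest writes, host reads */
    };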
overview.rst
6 enlightened guest on Microsoft's Hyper-V hypervisor. Hyper-V
24 some guest actions trap to Hyper-V. Hyper-V emulates the action and
25 returns control to the guest. This behavior is generally invisible
31 processor registers or in memory shared between the Linux guest and
38 the guest, and the Linux kernel can read or write these MSRs using
45 the Hyper-V host and the Linux guest. It uses memory that is shared
46 between Hyper-V and the guest, along with various signaling
70 * Linux tells Hyper-V the guest physical address (GPA) of the
73 GPAs, which usually do not need to be contiguous in the guest
87 range of 4 Kbytes. Since the Linux guest page size on x86/x64 is
[all …]
/linux-6.6.21/Documentation/virt/kvm/s390/
s390-pv.rst
10 access VM state like guest memory or guest registers. Instead, the
15 Each guest starts in non-protected mode and then may make a request to
16 transition into protected mode. On transition, KVM registers the guest
20 The Ultravisor will secure and decrypt the guest's boot memory
22 starts/stops and injected interrupts while the guest is running.
24 As access to the guest's state, such as the SIE state description, is
29 reduce exposed guest state.
40 field (offset 0x54). If the guest cpu is not enabled for the interrupt
50 access to the guest memory.
84 instruction text, in order not to leak guest instruction text.
[all …]
/linux-6.6.21/Documentation/arch/s390/
vfio-ap.rst
122 Let's now take a look at how AP instructions executed on a guest are interpreted
128 control domains assigned to the KVM guest:
131 to the KVM guest. Each bit in the mask, from left to right, corresponds to
133 use by the KVM guest.
136 assigned to the KVM guest. Each bit in the mask, from left to right,
138 corresponding queue is valid for use by the KVM guest.
141 assigned to the KVM guest. The ADM bit mask controls which domains can be
143 guest. Each bit in the mask, from left to right, corresponds to a domain from
153 adapters 1 and 2 and usage domains 5 and 6 are assigned to a guest, the APQNs
154 (1,5), (1,6), (2,5) and (2,6) will be valid for the guest.
[all …]
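
The doc's example is just the cross product of two bit masks. A standalone sketch reproducing it; 64-bit words stand in for the 256-bit masks, and bits are numbered from the LSB here, whereas the hardware masks count from the leftmost bit:

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint64_t apm = (1ull << 1) | (1ull << 2);   /* adapters 1 and 2 */
        uint64_t aqm = (1ull << 5) | (1ull << 6);   /* usage domains 5 and 6 */

        /* Every (adapter, domain) pair with both bits set is a valid
         * APQN: (1,5), (1,6), (2,5), (2,6) for the masks above. */
        for (int a = 0; a < 64; a++)
            for (int d = 0; d < 64; d++)
                if ((apm >> a & 1) && (aqm >> d & 1))
                    printf("APQN (%d,%d)\n", a, d);
        return 0;
    }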
/linux-6.6.21/Documentation/arch/arm64/
perf.rst
34 For the guest this attribute will exclude EL1. Please note that EL2 is
35 never counted within a guest.
48 guest/host transitions.
50 For the guest this attribute has no effect. Please note that EL2 is
51 never counted within a guest.
57 These attributes exclude the KVM host and guest, respectively.
62 The KVM guest may run at EL0 (userspace) and EL1 (kernel).
66 must enable/disable counting on the entry and exit to the guest. This is
70 exiting the guest we disable/enable the event as appropriate based on the
74 for exclude_host. Upon entering and exiting the guest we modify the event
[all …]
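
exclude_host and exclude_guest are ordinary perf_event_attr bits; the rules above only govern which exception levels end up counted. A minimal sketch of a guest-only cycle counter (the pid/cpu choice and required privileges are per-use assumptions):

    #include <linux/perf_event.h>
    #include <sys/syscall.h>
    #include <string.h>
    #include <unistd.h>

    /* Count CPU cycles only while a guest runs on CPU 0.  Needs perf
     * privileges; returns the event fd or -1. */
    static int open_guest_cycles(void)
    {
        struct perf_event_attr attr;

        memset(&attr, 0, sizeof(attr));
        attr.size = sizeof(attr);
        attr.type = PERF_TYPE_HARDWARE;
        attr.config = PERF_COUNT_HW_CPU_CYCLES;
        attr.exclude_host = 1;          /* guest-side counting only */

        return syscall(SYS_perf_event_open, &attr,
                       /*pid*/ -1, /*cpu*/ 0, /*group_fd*/ -1, /*flags*/ 0);
    }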
/linux-6.6.21/arch/x86/xen/
Kconfig
7 bool "Xen guest support"
20 bool "Xen PV guest support"
29 Support running as a Xen PV guest.
61 bool "Xen PVHVM guest support"
65 Support running as a Xen PVHVM guest.
81 bool "Xen PVH guest support"
86 Support for running as a Xen PVH guest.
95 Support running as a Xen Dom0 guest.
/linux-6.6.21/tools/virtio/virtio-trace/
README
4 Trace agent is a user tool for sending trace data of a guest to a host in low
48 For example, if a guest uses three CPUs, the names are
83 example, if a guest uses three CPUs, chardev names should be trace-path-cpu0,
86 3) Boot the guest
87 You can find some chardev in /dev/virtio-ports/ in the guest.
93 0) Build trace agent in a guest
96 1) Enable ftrace in the guest
100 2) Run trace agent in the guest
104 option, trace data are output via stdout in the guest.
109 the guest will stop by specification of chardev in QEMU. This blocking mode may
[all …]
/linux-6.6.21/Documentation/virt/kvm/
vcpu-requests.rst
48 The goal of a VCPU kick is to bring a VCPU thread out of guest mode in
50 a guest mode exit. However, a VCPU thread may not be in guest mode at the
55 1) Send an IPI. This forces a guest mode exit.
56 2) Waking a sleeping VCPU. Sleeping VCPUs are VCPU threads outside guest
60 3) Nothing. When the VCPU is not in guest mode and the VCPU thread is not
67 guest is running in guest mode or not, as well as some specific
68 outside guest mode states. The architecture may use ``vcpu->mode`` to
76 The VCPU thread is outside guest mode.
80 The VCPU thread is in guest mode.
89 The VCPU thread is outside guest mode, but it wants the sender of
[all …]
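
The three cases reduce to a mode check on the target thread. A hedged sketch of that decision; the helper functions are hypothetical stand-ins, and the mode names mirror the doc:

    enum vcpu_mode {
        OUTSIDE_GUEST_MODE,     /* thread not executing guest code */
        IN_GUEST_MODE,          /* thread executing guest code */
        EXITING_GUEST_MODE,     /* already on its way out */
    };

    struct vcpu {
        enum vcpu_mode mode;
        int sleeping;
    };

    void wake_vcpu(struct vcpu *v);   /* hypothetical: wake the blocked thread */
    void send_ipi(struct vcpu *v);    /* hypothetical: IPI forces a guest exit */

    static void kick_vcpu(struct vcpu *v)
    {
        if (v->sleeping)
            wake_vcpu(v);               /* case 2: sleeping VCPU */
        else if (v->mode == IN_GUEST_MODE)
            send_ipi(v);                /* case 1: force a guest mode exit */
        /* case 3: outside guest mode and runnable - nothing to do; the
         * request is noticed before the next guest entry. */
    }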
/linux-6.6.21/Documentation/ABI/stable/
sysfs-hypervisor-xen
33 Space separated list of supported guest system types. Each type
40 <major>: major guest interface version
41 <minor>: minor guest interface version
43 "x86_32": 32 bit x86 guest without PAE
44 "x86_32p": 32 bit x86 guest with PAE
45 "x86_64": 64 bit x86 guest
46 "armv7l": 32 bit arm guest
47 "aarch64": 64 bit arm guest
64 Features the Xen hypervisor supports for the guest as defined
96 UUID of the guest as known to the Xen hypervisor.
/linux-6.6.21/tools/testing/vsock/
README
3 These tests exercise net/vmw_vsock/ host<->guest sockets for VMware, KVM, and
16 3. Install the kernel and tests inside the guest.
17 4. Boot the guest and ensure that the AF_VSOCK transport is enabled.
21 # host=server, guest=client
25 (guest)# $TEST_BINARY --mode=client \
30 # host=client, guest=server
31 (guest)# $TEST_BINARY --mode=server \
