* Questions About Virtio-GPU Patchset for Cloud Hypervisor
@ 2024-08-20 14:48 projectmikey
2024-08-25 11:02 ` Alyssa Ross
0 siblings, 1 reply; 2+ messages in thread
From: projectmikey @ 2024-08-20 14:48 UTC (permalink / raw)
To: discuss@spectrum-os.org
[-- Attachment #1: Type: text/plain, Size: 2240 bytes --]
Dear Spectrum Team,
I hope this email finds you well.
I am reaching out with a question about your patchset for Cloud Hypervisor with support for virtio-gpu [https://spectrum-os.org/software/cloud-hypervisor/]. First, I want to say thanks for the work that has been done — it is much appreciated!
I have successfully implemented the latest version of your patchset in my current environment. I am now curious if it can be used with multiple L2 guests, each securely utilizing different GPUs, running concurrently on an L1, and requiring that each L2 guest's resources be kept private and isolated from the others.
To provide some more context, I am currently trying to achieve GPU acceleration within a nested L2 VM on GCP (L1: KVM on GCP => L2: Cloud Hypervisor). I'm using GCP rather than a bare metal environment because GCP supports nested virtualization on their affordable N1 and G2 series VMs.
Since I have limited access to the GCP environment, specifically to the L0 and L1 hypervisor layers, I am unable to modify or access BIOS settings or certain underlying configurations at those levels. I am uncertain whether my attempts to configure the environment were entirely correct. However, after extensive online research, I couldn’t find a definitive answer on whether using VFIO is possible in GCP VMs. Despite my efforts, I have not been able to bind to vfio-pci without first enabling No-IOMMU mode on the system.
If secure VFIO usage cannot be achieved, I'm open to exploring alternatives like virtio or vfio-user, provided they can securely allocate GPU access within the L2s without memory or resource sharing between the VMs or other potential security issues that I'm not yet aware of.
Do you know if this is possible with the current version of your patchset? If not, do you have any suggestions on how to achieve this in a nested setup like this one [https://cloud.google.com/compute/docs/instances/nested-virtualization/overview]? Any other insights you could share that might point me in the right direction for accomplishing this securely would be incredibly helpful, as my knowledge in this area is limited.
Thanks again.
-Mike Calendo
Sent with [Proton Mail](https://proton.me/) secure email.
[-- Attachment #2: Type: text/html, Size: 3391 bytes --]
^ permalink raw reply [flat|nested] 2+ messages in thread
* Re: Questions About Virtio-GPU Patchset for Cloud Hypervisor
2024-08-20 14:48 Questions About Virtio-GPU Patchset for Cloud Hypervisor projectmikey
@ 2024-08-25 11:02 ` Alyssa Ross
0 siblings, 0 replies; 2+ messages in thread
From: Alyssa Ross @ 2024-08-25 11:02 UTC (permalink / raw)
To: projectmikey; +Cc: discuss
[-- Attachment #1: Type: text/plain, Size: 4095 bytes --]
projectmikey <projectmikey@proton.me> writes:
> I have successfully implemented the latest version of your patchset in
> my current environment. I am now curious if it can be used with
> multiple L2 guests, each securely utilizing different GPUs, running
> concurrently on an L1, and requiring that each L2 guest's resources be
> kept private and isolated from the others.
>
> To provide some more context, I am currently trying to achieve GPU
> acceleration within a nested L2 VM on GCP (L1: KVM on GCP => L2: Cloud
> Hypervisor). I'm using GCP rather than a bare metal environment
> because GCP supports nested virtualization on their affordable N1 and
> G2 series VMs.
>
> Since I have limited access to the GCP environment, specifically to
> the L0 hypervisor and L1 hypervisor layers, I am unable to modify or
> access BIOS settings or certain underlying configurations at those
> levels. I am uncertain whether my attempts to configure the
> environment were entirely correct. However, after extensive online
> research, I couldn’t find a definitive answer on whether using VFIO is
> possible in GCP VMs. Despite my efforts, I have not been able to bind
> to vfio-pci without first enabling No-IOMMU mode on the system.
>
> If secure VFIO usage cannot be achieved, I'm open to exploring
> alternatives like virtio or vfio-user, provided they can securely
> allocate GPU access within the L2s without memory or resource sharing
> between the VMs or other potential security issues that I'm not yet
> aware of.
>
> Do you know if this is possible with the current version of your
> patchset? If not, do you have any suggestions on how to achieve this
> in a nested setup like this one
> [https://cloud.google.com/compute/docs/instances/nested-virtualization/overview]?
> Any other insights you could share that might point me in the right
> direction for accomplishing this securely would be incredibly helpful,
> as my knowledge in this area is limited.
The crosvm virtio-gpu device, as used by Spectrum's Cloud Hypervisor
patchset, gives you a couple of different options for this:
• A paravirtualized virtio-gpu device is presented to the guest, backed
by a GPU device on the host — the "virgl"/"virgl2" (OpenGL) and
"venus" (Vulkan) features. It's not clear to me what security
properties this is designed to have — whether it's designed to
protect against VM escapes, for example, so it would be important to
find that out before using it. Security properties aside, you should
be able to implement what you want using this. Usually the
paravirtualized GPUs would all share a single host GPU, but to have
each paravirtualized GPU backed by a different physical GPU, you
could point the crosvm process for each guest to a different GPU. I
don't know how to do this, but I'm confident it's possible.
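One possible approach (a sketch only — the device paths, and the assumption that the backend simply uses whichever DRM render node it is able to open, are mine, not documented crosvm behaviour) is to give each guest's crosvm process permission to open exactly one render node:

```shell
# Hypothetical host-side setup: one dedicated system user per guest,
# each allowed to open only one DRM render node. Assumes GPU 0 is
# /dev/dri/renderD128 and GPU 1 is /dev/dri/renderD129.
sudo useradd --system guest0
sudo useradd --system guest1
sudo chown guest0 /dev/dri/renderD128
sudo chown guest1 /dev/dri/renderD129
sudo chmod 600 /dev/dri/renderD128 /dev/dri/renderD129

# Each guest's crosvm backend then runs as its own user, so it can
# only open "its" GPU, whatever the selection mechanism turns out to be.
sudo -u guest0 crosvm ...   # backs guest 0 with renderD128
sudo -u guest1 crosvm ...   # backs guest 1 with renderD129
```

This only enforces which node each process can open; check the crosvm documentation for whether the GPU backend can be pointed at a specific render node directly.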
• virtio-gpu native contexts (the "drm" feature) passes through the
Linux DRM uAPI, allowing guests to interact with the GPU in the same
way that a host application could, without an intermediary
paravirtualized device. This means you inherit the security
properties of normal application GPU access on Linux. Once again,
this would normally share a single GPU between multiple guests, but
there's no reason you have to do it this way. Native contexts are
very new, and have to be implemented for each GPU driver, so there's
a good chance it won't be implemented yet for whatever GPUs you're
using.
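To find out whether native contexts are even a candidate for your hardware, a first step is to see which kernel DRM driver backs each GPU, since native-context support has to exist for that specific driver (a rough probe; sysfs layout as on typical Linux systems):

```shell
# Print the kernel DRM driver bound to each GPU. Native-context
# implementations exist only for some drivers, so check the current
# state of virglrenderer/crosvm support for whichever name this prints.
for card in /sys/class/drm/card[0-9]; do
    echo "$card: $(basename "$(readlink "$card/device/driver")")"
done
```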
The ideal way to do what you want would not require virtio-gpu at all.
It would be to have your L1 either have a whole GPU attached, or some
individual VFs. If it were a whole GPU, and you wanted to, you could
then partition it. (Whether that's secure of course depends on how good
a job the GPU vendor has done.) Then GPUs or VFs could be passed
through to guests. If VFIO is not working inside your guest without
noiommu, the
guests. If VFIO is not working inside your guest without noiommu, the
best thing to do would be to get your L1 to have an IOMMU device, but I
understand that GCP might not offer that.
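For what it's worth, if you do end up with an L1 that has a working IOMMU and a GPU or VF attached, the passthrough itself is the easy part. A sketch (the PCI address 0000:00:04.0 and the kernel/disk paths are placeholders):

```shell
# First confirm the L1 actually has a usable IOMMU; if there are no
# groups here, only the insecure noiommu mode is available.
ls /sys/kernel/iommu_groups/

# Rebind the GPU (placeholder address) from its native driver to vfio-pci.
modprobe vfio-pci
GPU=0000:00:04.0
echo "$GPU" > "/sys/bus/pci/devices/$GPU/driver/unbind"
echo vfio-pci > "/sys/bus/pci/devices/$GPU/driver_override"
echo "$GPU" > /sys/bus/pci/drivers_probe

# Pass the whole device through to the L2 guest.
cloud-hypervisor \
    --kernel vmlinux \
    --disk path=rootfs.img \
    --device "path=/sys/bus/pci/devices/$GPU/"
```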
[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 832 bytes --]