From: Alyssa Ross <hi@alyssa.is>
To: projectmikey <projectmikey@proton.me>
Cc: discuss@spectrum-os.org
Subject: Re: Questions About Virtio-GPU Patchset for Cloud Hypervisor
Date: Sun, 25 Aug 2024 13:02:18 +0200 [thread overview]
Message-ID: <87bk1gdfwl.fsf@alyssa.is> (raw)
In-Reply-To: <8d6dfbgv0soRgcAY-w0_7dUYnfhtYt8kWoGIBSwT0Z82Zmvgd5Fww2c54a5gOWD5mo6dHKMpvZeKJAKJ41kSXxVvdq7l0yVauo-Mk3KkPBo=@proton.me>
projectmikey <projectmikey@proton.me> writes:
> I have successfully implemented the latest version of your patchset in
> my current environment. I am now curious if it can be used with
> multiple L2 guests, each securely utilizing different GPUs, running
> concurrently on an L1, and requiring that each L2 guest's resources be
> kept private and isolated from the others.
>
> To provide some more context, I am currently trying to achieve GPU
> acceleration within a nested L2 VM on GCP (L1: KVM on GCP => L2: Cloud
> Hypervisor). I'm using GCP rather than a bare metal environment
> because GCP supports nested virtualization on their affordable N1 and
> G2 series VMs.
>
> Since I have limited access to the GCP environment, specifically to
> the L0 hypervisor and L1 hypervisor layers, I am unable to modify or
> access BIOS settings or certain underlying configurations at those
> levels. I am uncertain whether my attempts to configure the
> environment were entirely correct. However, after extensive online
> research, I couldn’t find a definitive answer on whether using VFIO is
> possible in GCP VMs. Despite my efforts, I have not been able to bind
> to vfio-pci without first enabling No-IOMMU mode on the system.
>
> If secure VFIO usage cannot be achieved, I'm open to exploring
> alternatives like virtio or vfio-user, provided they can securely
> allocate GPU access within the L2s without memory or resource sharing
> between the VMs or other potential security issues that I'm not yet
> aware of.
>
> Do you know if this is possible with the current version of your
> patchset? If not, do you have any suggestions on how to achieve this
> in a nested setup like this one
> [https://cloud.google.com/compute/docs/instances/nested-virtualization/overview]?
> Any other insights you could share that might point me in the right
> direction for accomplishing this securely would be incredibly helpful,
> as my knowledge in this area is limited.
The crosvm virtio-gpu device, as used by Spectrum's Cloud Hypervisor
patchset, gives you a couple of different options for this:
• A paravirtualized virtio-gpu device is presented to the guest, backed
by a GPU device on the host — the "virgl"/"virgl2" (OpenGL) and
"venus" (Vulkan) features. It's not clear to me what security
properties this is designed to have — whether it's designed to
protect against VM escapes, for example, so it would be important to
find that out before using it. Security properties aside, you should
be able to implement what you want using this. Usually the
paravirtualized GPUs would all share a single host GPU, but to have
each paravirtualized GPU backed by a different physical GPU, you
could point the crosvm process for each guest to a different GPU. I
don't know how to do this, but I'm confident it's possible.
• virtio-gpu native contexts (the "drm" feature) pass through the
Linux DRM uAPI, allowing guests to interact with the GPU in the same
way that a host application could, without an intermediary
paravirtualized device. This means you inherit the security
properties of normal application GPU access on Linux. Once again,
this would normally share a single GPU between multiple guests, but
there's no reason you have to do it this way. Native contexts are
very new, and have to be implemented for each GPU driver, so there's
a good chance they won't be supported yet for whatever GPUs you're
using.
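As a starting point for pointing each guest's crosvm process at a
different GPU, here's a sketch of how you could enumerate the host's
DRM render nodes and the PCI device backing each one. This is a plain
Linux sysfs walk, nothing crosvm-specific, and it assumes a host with
the DRM subsystem (it prints nothing if no render nodes exist):

```shell
#!/bin/sh
# List each DRM render node alongside the PCI device that backs it.
list_render_nodes() {
    for node in /dev/dri/renderD*; do
        [ -e "$node" ] || continue
        name=$(basename "$node")
        # /sys/class/drm/<node>/device is a symlink to the backing device
        pci=$(readlink -f "/sys/class/drm/$name/device" 2>/dev/null)
        echo "$name -> ${pci:-unknown}"
    done
}

list_render_nodes
```

Once you know which render node corresponds to which GPU, each guest's
device process could be told to open a different one.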
The ideal way to do what you want would not require virtio-gpu at all.
It would be to have your L1 either have a whole GPU attached, or some
individual VFs. If it was a whole GPU, and you wanted to, you could
then partition it. (Whether that's secure of course depends on how good
a job the GPU vendor has done.) Then GPUs or VFs could be passed
through to guests. If VFIO is not working inside your guest without
noiommu, the
guests. If VFIO is not working inside your guest without noiommu, the
best thing to do would be to get your L1 to have an IOMMU device, but I
understand that GCP might not offer that.
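To illustrate the passthrough path, here's a sketch of binding a PCI
device (a GPU or a VF) to vfio-pci using the kernel's driver_override
mechanism. The BDF is a placeholder, it needs root and a working
IOMMU (exactly the part GCP may not give you), and the sysfs path is
parameterized only so the steps can be dry-run against a fake tree:

```shell
#!/bin/sh
# Sketch: bind a PCI device to vfio-pci via driver_override.
# Run "modprobe vfio-pci" first so the driver is loaded.
bind_vfio() {
    bdf=$1
    sysfs=${2:-/sys}
    dev="$sysfs/bus/pci/devices/$bdf"

    # Tell the PCI core that only vfio-pci may claim this device
    echo vfio-pci > "$dev/driver_override"

    # Detach the current driver, if one is bound
    if [ -e "$dev/driver/unbind" ]; then
        echo "$bdf" > "$dev/driver/unbind"
    fi

    # Reprobe; with driver_override set, vfio-pci picks the device up
    echo "$bdf" > "$sysfs/bus/pci/drivers_probe"
}
```

After something like "bind_vfio 0000:00:05.0", the device could then
be handed to Cloud Hypervisor with --device
path=/sys/bus/pci/devices/0000:00:05.0/, if I remember the syntax
right.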
Thread overview: 2+ messages
2024-08-20 14:48 Questions About Virtio-GPU Patchset for Cloud Hypervisor projectmikey
2024-08-25 11:02 ` Alyssa Ross [this message]