From: Alyssa Ross <hi@alyssa.is>
To: projectmikey
Subject: Re: Questions About Virtio-GPU Patchset for Cloud Hypervisor
Date: Sun, 25 Aug 2024 13:02:18 +0200
Message-ID: <87bk1gdfwl.fsf@alyssa.is>
CC: discuss@spectrum-os.org
List-Id: General high-level discussion about Spectrum

projectmikey writes:

> I have successfully implemented the latest version of your patchset in
> my current environment.  I am now curious if it can be used with
> multiple L2 guests, each securely utilizing different GPUs, running
> concurrently on an L1, and requiring that each L2 guest's resources be
> kept private and isolated from the others.
>
> To provide some more context, I am currently trying to achieve GPU
> acceleration within a nested L2 VM on GCP (L1: KVM on GCP => L2: Cloud
> Hypervisor).  I'm using GCP rather than a bare metal environment
> because GCP supports nested virtualization on their affordable N1 and
> G2 series VMs.
>
> Since I have limited access to the GCP environment, specifically to
> the L0 hypervisor and L1 hypervisor layers, I am unable to modify or
> access BIOS settings or certain underlying configurations at those
> levels.  I am uncertain whether my attempts to configure the
> environment were entirely correct.
> However, after extensive online
> research, I couldn't find a definitive answer on whether using VFIO is
> possible in GCP VMs.  Despite my efforts, I have not been able to bind
> to vfio-pci without first enabling No-IOMMU mode on the system.
>
> If secure VFIO usage cannot be achieved, I'm open to exploring
> alternatives like virtio or vfio-user, provided they can securely
> allocate GPU access within the L2s without memory or resource sharing
> between the VMs or other potential security issues that I'm not yet
> aware of.
>
> Do you know if this is possible with the current version of your
> patchset?  If not, do you have any suggestions on how to achieve this
> in a nested setup like this one
> [https://cloud.google.com/compute/docs/instances/nested-virtualization/overview]?
> Any other insights you could share that might point me in the right
> direction for accomplishing this securely would be incredibly helpful,
> as my knowledge in this area is limited.

The crosvm virtio-gpu device, as used by Spectrum's Cloud Hypervisor
patchset, gives you a couple of different options for this:

• A paravirtualized virtio-gpu device is presented to the guest, backed
  by a GPU device on the host — the "virgl"/"virgl2" (OpenGL) and
  "venus" (Vulkan) features.  It's not clear to me what security
  properties this is designed to have — whether it's designed to
  protect against VM escapes, for example, so it would be important to
  find that out before using it.  Security properties aside, you should
  be able to implement what you want using this.  Usually the
  paravirtualized GPUs would all share a single host GPU, but to have
  each paravirtualized GPU backed by a different physical GPU, you
  could point the crosvm process for each guest to a different GPU.  I
  don't know how to do this, but I'm confident it's possible.
• virtio-gpu native contexts (the "drm" feature) pass through the Linux
  DRM uAPI, allowing guests to interact with the GPU in the same way
  that a host application could, without an intermediary
  paravirtualized device.  This means you inherit the security
  properties of normal application GPU access on Linux.  Once again,
  this would normally share a single GPU between multiple guests, but
  there's no reason you have to do it this way.  Native contexts are
  very new, and have to be implemented for each GPU driver, so there's
  a good chance they won't be implemented yet for whatever GPUs you're
  using.

The ideal way to do what you want would not require virtio-gpu at all.
It would be to have your L1 either have a whole GPU attached, or some
individual VFs.  If it was a whole GPU, and you wanted to, you could
then partition it.  (Whether that's secure of course depends on how
good a job the GPU vendor has done.)  Then GPUs or VFs could be passed
through to guests.

If VFIO is not working inside your guest without noiommu, the best
thing to do would be to get your L1 to have an IOMMU device, but I
understand that GCP might not offer that.
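As a starting point for the "different physical GPU per guest" idea
above: on Linux, each GPU (or each SR-IOV VF) that a render stack can
use shows up as a DRM render node under /dev/dri.  This is only a
sketch of the discovery step — the helper name is mine, and how you
would then hand a specific node to each crosvm instance is exactly the
part I said I don't know:

```python
import glob


def list_render_nodes():
    """Return the host's DRM render nodes (/dev/dri/renderD*), sorted.

    Each render node is backed by one GPU (or one SR-IOV VF), so one
    possible scheme is to dedicate a different node to the device
    process backing each guest.
    """
    return sorted(glob.glob("/dev/dri/renderD*"))


if __name__ == "__main__":
    print(list_render_nodes())
```

On a machine with two GPUs you would typically see something like
/dev/dri/renderD128 and /dev/dri/renderD129; an empty list means the
kernel has no usable DRM device at all.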
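On the noiommu point: a quick way to tell whether your L1 actually has
a working IOMMU (and hence whether vfio-pci can bind without the
unsafe no-IOMMU mode) is to look at sysfs.  A minimal sketch — the
function name is mine, but /sys/class/iommu is the standard place the
kernel registers active IOMMUs:

```python
import os


def has_iommu(sysfs_path="/sys/class/iommu"):
    """Check whether the kernel has registered any IOMMU.

    /sys/class/iommu contains one entry per active IOMMU; if it is
    empty or absent, vfio-pci can only be used via the unsafe
    no-IOMMU mode (enable_unsafe_noiommu_mode).
    """
    return os.path.isdir(sysfs_path) and bool(os.listdir(sysfs_path))


if __name__ == "__main__":
    print(has_iommu())
```

If this prints False inside your L1, that matches what you saw: no
IOMMU exposed to the guest, so no secure VFIO.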