From: Alyssa Ross <hi@alyssa.is>
To: discuss@spectrum-os.org
Subject: This Week in Spectrum, 2025-W37
Date: Wed, 17 Sep 2025 09:29:26 +0200
CC: Demi Marie Obenour, Yureka Lilian

Decided I'd rather get this out late than miss another week.

There was no This Week in Spectrum for 2025-W36, because I was too tired after KVM Forum and NixCon to get it out on time, and in the last week I was in the hospital for several days for various tests, including when I'd usually send this out. Anyway, with two and a bit weeks to cover, this is going to be a long one.

KVM Forum
---------

It was nice to put faces to names from mailing lists; I think it will make it easier to approach people online to discuss new KVM things.

The most interesting talk for me was "IOMMU in rust-vmm, and new FUSE+VDUSE use cases"[1], in which I learned:

• virtiofsd is going to get support for additional backends, e.g. NFS and cloud storage.

• A vhost-user↔VDUSE bridge is in development, which will allow a vhost-user device to be used by the kernel running that device. This raises the possibility of moving device implementations out of the host kernel.

• virtio-vhost-user never made it through standardization and implementation, not for any fundamental reason, but simply because nobody was sufficiently motivated to make it happen.
This also came up again just yesterday on the virtualization list[2]. It would be nice to see some momentum again towards moving device implementations into VMs.

[1]: https://www.youtube.com/watch?v=qsFc234tzz4&pp=ygUOaW9tbXUgcnVzdC12bW0%3D
[2]: https://lore.kernel.org/virtualization/CAJSP0QWV+=+v9Z0wU9qJcdToDKBnWiGKzOVvAsdyTOtES=oFsw@mail.gmail.com/

NixCon
------

KVM Forum and NixCon partially overlapped, so I only managed to make the second half of the latter. I didn't see any talks, but had some good conversations, mostly in the areas of documentation (which I hope we'll get a lot of work done on next year) and funding and business model questions, which will be very important to get right to keep the project going after NGI ends. Speaking of which:

NGI Zero Commons Business Circle
--------------------------------

As recipients of an NGI Zero grant, we have access to various support services[3]. One of these, called "Business Circle", provides business consulting and mentorship. I had an initial call with them after I got home from NixCon. Not much to report yet, but it does sound like we'll have plenty of support available for legal and similar matters if we need it to take the project forward.

[3]: https://nlnet.nl/NGI0/services/

Cloud Hypervisor
----------------

Demi and I are currently working on sandboxing services on the Spectrum host: best-effort defense in depth, since these services are part of our TCB anyway. One easy win there, I thought, would be enabling Cloud Hypervisor's support for locking itself down with landlock, preventing it from accessing any non-allowlisted files. This should have been just a case of turning it on and passing it a couple of extra paths to allow access to, so that VMs could have new devices passed through to them after creation, but I ended up finding quite a few bugs in Cloud Hypervisor's implementation that had to be fixed first.
The default rules were overly strict, making VSOCK non-functional if enabled[4] and causing startup failures on many aarch64 machines[5]. I also found that if landlock was requested but the kernel didn't support it, Cloud Hypervisor would just start anyway, without landlock[6]. I've fixed all of these upstream now, and conveniently, a new Cloud Hypervisor release was scheduled for a couple of days afterwards, so these changes have already made their way into Spectrum. I haven't committed the Spectrum-side landlock work yet, though, as I want to test it a bit more first.

[4]: https://github.com/cloud-hypervisor/cloud-hypervisor/pull/7334
[5]: https://github.com/cloud-hypervisor/cloud-hypervisor/pull/7331
[6]: https://github.com/cloud-hypervisor/cloud-hypervisor/pull/7335

rust-vmm
--------

The next step towards sound support in Spectrum is running vhost-device-sound on the host, but we can't currently do this robustly (and without polling), because vhost-device-sound doesn't support any form of socket activation or readiness notification. I've been working on adding support for the former, but it turned out not to be possible with the current interface of the vhost-user-backend crate. In fact, I discovered that all the vhost-device devices create a new socket each time they're ready to accept a connection, meaning that after the end of a connection there's a window during which a new connection can't be accepted. The right thing to do here is to reuse one socket across connections, and making that change will conveniently also make it straightforward to optionally accept a socket provided by the service manager rather than creating it in the vhost-device program. So I changed the vhost-user-backend API[7], and am now waiting for a release, at which point I'll submit my changes to vhost-device-sound to update vhost-user-backend, fix the race between connections, and add support for using an externally provided socket.
[7]: https://github.com/rust-vmm/vhost/pull/321

udev
----

Demi has been working on switching the host system from mdevd to systemd-udevd, the motivation being that systemd-udevd comes with a big database of device quirks, which is likely to benefit our aim of broad hardware support. This was originally done as part of her experiment with booting Spectrum with systemd[8], but has now been extracted into a standalone series[9], which is currently awaiting review. One complication, as I understand it, is that unlike with every other service we run, we're not confident it's okay to start services that depend on udev as soon as the udev control socket is created, so we needed an adapter from systemd's readiness notification protocol to s6's.

[8]: https://spectrum-os.org/lists/archives/spectrum-devel/20250904-systemd-v1-0-2a63b790a913@gmail.com/
[9]: https://spectrum-os.org/lists/archives/spectrum-devel/20250913-udev-v1-0-eade4ab8f2b4@gmail.com/

xdp-forwarder
-------------

Yureka submitted a new version of her program for forwarding packets between two network interfaces[10], which will be part of our new networking stack. The idea is that we can get the program reviewed and committed without having to wait for everything else to be ready before switching to it. There were some more review comments from Demi and me, but nothing major, so I expect we'll see this part of the networking work finished soon.

[10]: https://spectrum-os.org/lists/archives/spectrum-devel/20250906141228.2357630-1-yureka@cyberchaos.dev/

Ergonomics and other small improvements
---------------------------------------

Demi has been submitting a lot of small fixes and improvements to the Spectrum development experience. Between the conferences and the hospital I haven't had a chance to look at many of these yet, but I did apply a few of the most straightforward ones already.
Of course, there have also been all the usual recurring things like server updates, Nixpkgs reviews and contributions, and so on, but this is getting far too long already, so I'm going to leave it here. Hopefully at the end of this week I'll be able to get back to regular updates of reasonable length…