patches and low-level development discussion
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Demi Marie Obenour <demiobenour@gmail.com>
Cc: "Alyssa Ross" <hi@alyssa.is>, "Bo Chen" <bchen@crusoe.ai>,
	"Rob Bradford" <rbradford@meta.com>,
	"Wei Liu" <liuwe@microsoft.com>,
	"Sebastien Boeuf" <seb@rivosinc.com>,
	"qemu-devel@nongnu.org" <qemu-devel@nongnu.org>,
	"dev@lists.cloudhypervisor.org" <dev@lists.cloudhypervisor.org>,
	"Spectrum OS Development" <devel@spectrum-os.org>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	"Stefano Garzarella" <sgarzare@redhat.com>,
	"Alex Bennée" <alex.bennee@linaro.org>,
	"Manos Pitsidianakis" <manos.pitsidianakis@linaro.org>,
	"Marc-André Lureau" <marcandre.lureau@redhat.com>,
	"virtio-comment@lists.linux.dev" <virtio-comment@lists.linux.dev>
Subject: Re: virtio-msg inter-VM transport vs virtio-vhost-user
Date: Wed, 4 Mar 2026 13:36:24 +0000	[thread overview]
Message-ID: <928B9B9B-5D43-445B-A1F2-8577DF73C326@arm.com> (raw)
In-Reply-To: <c0e4f57e-4803-4908-88fb-0258199edeb6@gmail.com>

Hi Demi,

> On 4 Mar 2026, at 10:18, Demi Marie Obenour <demiobenour@gmail.com> wrote:
>
> On 3/4/26 03:26, Bertrand Marquis wrote:
>> Hi Demi,
>>
>>> On 3 Mar 2026, at 18:56, Demi Marie Obenour <demiobenour@gmail.com> wrote:
>>>
>>> Spectrum (https://spectrum-os.org) is going to be implementing
>>> virtio devices outside of the host.  One proposed method of doing
>>> this is virtio-vhost-user, which is a virtio device that allows a
>>> VM to expose a vhost-user device to another VM.  For instance, one
>>> could assign a NIC to one VM and have it provide a vhost-user-net
>>> device for use by a different VM.
>>>
>>> I brought this up on the KVM/QEMU community call today.  Alex Bennée
>>> recommended using virtio-msg instead.  However, I have a few concerns
>>> with this:
>>>
>>> 1. Virtio-msg buses are specific to a given hypervisor or (in the
>>>  case of FF-A) to a given CPU architecture.  None of the current
>>>  buses support KVM on platforms other than Arm64.  Therefore,
>>>  a brand-new bus would be needed.
>>
>> Even FF-A is not usable at the moment with KVM, as there is no FF-A
>> support for VM-to-VM communication in KVM (only the host can communicate
>> with the secure world).
>
> Ah, I thought that pKVM had pVM <=> pVM communication implemented.

pKVM is different from KVM.

Google is working on a pKVM implementation, and their primary use cases are:
- Host to secure world
- pVM to secure world

pVM to host and pVM to pVM communication are not using virtio-msg for now.

>
>> MMIO- or PCI-based virtio in the KVM case is working and was not really
>> the target of our work.
>>
>> pKVM is a target and is being worked on using FF-A, but pVM-to-host
>> virtio is still using PCI at the moment.
>
> Makes sense.  Does that involve any confidential computing-specific
> code in the PCI subsystem, or has the subsystem as a whole been
> hardened against malicious devices?  That matters for some of my
> use-cases, where a device may be present but not authorized for use.
> It's supposed to be passed through to a VM.

That question would have to be answered by Google, but for now I do not
think any confidential computing is involved.

>
>> Now creating a KVM-specific bus reusing the concept of a FIFO
>> to transfer messages between a VM and the host is definitely possible
>> and should not be that complex.
>
> Can you give a rough estimate of how much code you are referring to?

If you can reuse the FIFO implementation from FF-A, that would mean
implementing a discovery system, the FIFO setup, and some way to
share memory (which should already exist), so something around 500
to 1000 lines of code, I would say.

The current FF-A bus implementation I am working on is a lot more complex
(it supports FIFOs but also direct and indirect messages and the FF-A ways
of sharing memory) and is around 3000 lines of code.

>
>> Right now I am working on a backend implementation where:
>> - the bus implementation would be in the Linux kernel, allowing several
>> implementations like FF-A, Xen, or others to be done as Linux drivers.
>
> What is the advantage of having the bus implementation in the kernel
> as opposed to userspace?  Is it because the bus implementation is
> responsible for protecting the kernel from malicious userspace?

The main reason is to handle memory sharing and mapping, as well as
to have an easy way to transfer messages through any hardware means.

Such a design also allows the Qemu implementation to stay unchanged:
you just load a new kernel module if you want to implement a new bus.

>
> In case you can't tell already, I'm a fan of microkernels :).

I am a Xen maintainer and I did a PoC of virtio-msg using a baremetal
solution to experiment, so the design is not bound to Linux :-)
Google also did an implementation in Rust in Trusty.

>
>> - have a bus interface provided to userland, so that Qemu could contain
>> the transport implementation but would not need to be modified for new
>> bus implementations.
>>
>> Reusing this, it should be fairly simple to define a KVM bus and reuse
>> the other parts of the implementations.
>
> How long do you think it will be before this could be included in
> upstream, mainline Linux?  I imagine this would need to wait for your
> existing virtio-msg work to be upstreamed, and then I would need to
> upstream a separate driver and spec.

We are working on it.
A first RFC was published, and we will continue to work on a more complete
solution (I am working on that right now).

We have several PoCs that you can find on the HVAC group page mentioned
in the cover letter.

Regarding a fully working VM-to-VM example with Qemu, I would say that in
a month or two I will be able to share something.

>
>>> 2. Virtio-msg requires not-yet-upstream drivers in both the frontend
>>>  (the VM using the device) and the backend (the VM providing the
>>>  device).  Vhost-user uses any of the existing transports, such as
>>>  PCI or MMIO.  This means that upstream drivers can be used in the
>>>  frontend, and also enables support for Windows and other guests
>>>  that lack support for virtio-msg.
>>
>> This is definitely true and will always stay true. To use virtio-msg you
>> will need a new transport implementation for it, plus the bus
>> implementation(s) you require. To be used on Windows, those parts will
>> also be needed.
>>
>> In your example here you rely on the existing MMIO or PCI transport and
>> existing vhost implementations. This is definitely quicker to do and use.
>> The goal is not to replace what works but to provide solutions for use
>> cases where MMIO or PCI currently do not work or need to be optimized.
>
> My use-case doesn't have the same restrictions, as long as config
> space access is rare and denial of service is not a concern.

Then the existing MMIO/PCI transports are definitely OK. You could switch
to virtio-msg in the future to enhance asynchronous handling without
impacting the rest of your stack.

>
>>> 3. Vhost-user is already widely deployed, so frontend implementations
>>>  are quite well tested.  A KVM-specific virtio-msg transport would
>>>  serve only one purpose: driver VMs (with assigned devices) on
>>>  non-Arm64 platforms.  This is a quite niche use-case.  Therefore,
>>>  I'm concerned that the needed frontend code will be poorly tested
>>>  and bitrot.
>>
>> We are in the process of defining the specification for virtio-msg and we
>> are working on implementations in parallel, so our implementations are
>> for now not widely tested, that is clear.
>> Now a specific KVM virtio-msg bus implementation would reuse the
>> transport and driver implementations, which would be used on any
>> platform in the future. I am not following your niche use-case argument
>> here, or the poorly-tested one. Maybe I am missing something.
>>
>>>
>>> Manos Pitsidianakis stated that vhost-user does not make sense in
>>> this case.  Why is that?  Would it make sense to use virtio-msg
>>> between VMM and its VM, and expose a vhost-user device to the
>>> outside world?  What about having the virtio-vhost-user guest driver
>>> emulate a virtio-msg transport, so that it can be used with any device
>>> implementation supporting virtio-msg?
>>
>> I am not following your point here. You want to do virtio on top of virtio?
>
> Yes, actually!  More specifically, I want to use a virtio device in
> one VM to implement a virtio device for a different VM.
>
> To avoid confusion, and to match Xen terminology, I'll call the
> userspace VMM that acts as a vhost-user server the *backend* VMM.
> The *frontend* VMM is the one that acts as a vhost-user client, and is
> completely unaware that anything special is happening in the backend.
>
> A virtio vhost-user device is a virtio device that acts as a vhost-user
> server.  For each vhost-user device it wants its guest to implement,
> a VMM listens on an AF_UNIX socket, just like any other userspace
> process would.  It then creates a virtio device in this guest, with
> type *vhost-user device backend*.  This device has a single virtqueue,
> and vhost-user messages are forwarded between that virtqueue and the
> AF_UNIX socket.
>
> The backend VMM needs to handle ancillary data specially.
> For instance, memory that it mmap's is placed in a PCI BAR that is
> accessible to the guest, much like a virtio-GPU blob object.  Vring
> kick file descriptors are registered with the kernel as irqfds, so
> when the frontend signals them, the backend VM receives an interrupt.
> Vring call, error, and log file descriptors are registered with the
> kernel as ioeventfds, so that the backend VM can trigger them by
> MMIO writes.  It also registers and unregisters irqfds in response
> to interrupts being unmasked and masked.
>
> After everything has been set up, the backend VMM is *not* involved in
> performance-critical operations.  The vhost-user shared memory regions
> are mapped into its address space, so it can access them just like any
> other memory.  Writing to an ioeventfd in the frontend VM causes KVM to
> trigger an interrupt in the backend VM and vice versa.  I expect that
> performance should be almost as good as a vhost-user device implemented
> in userspace, but with the full isolation guarantees of a VM.

So basically you want a solution to implement device emulation in
independent applications (instead of having everything inside the VMM).

The current layering I am working on would not allow this easily, as the
VMM in my case handles the transport side of things, so it holds the
dispatch between devices.
To make that possible without a VMM, we would need to have the dispatch
done in the kernel and then allow such use cases directly.
This is not impossible but does not correspond to what we are working on.

Cheers
Bertrand

>
> https://stefanha.github.io/virtio/vhost-user-slave.html has the spec
> I am currently using.  It does a much better job explaining things.
> --
> Sincerely,
> Demi Marie Obenour (she/her/hers)


IMPORTANT NOTICE: The contents of this email and any attachments are confidential and may also be privileged. If you are not the intended recipient, please notify the sender immediately and do not disclose the contents to any other person, use it for any purpose, or store or copy the information in any medium. Thank you.

Thread overview: 5+ messages
2026-03-03 17:56 virtio-msg inter-VM transport vs virtio-vhost-user Demi Marie Obenour
2026-03-04  8:26 ` Bertrand Marquis
2026-03-04  9:18   ` Demi Marie Obenour
2026-03-04 13:36     ` Bertrand Marquis [this message]
2026-03-04 16:01 ` Edgar E. Iglesias
