patches and low-level development discussion
* virtio-msg inter-VM transport vs virtio-vhost-user
@ 2026-03-03 17:56 Demi Marie Obenour
  2026-03-04  8:26 ` Bertrand Marquis
  2026-03-04 16:01 ` Edgar E. Iglesias
  0 siblings, 2 replies; 5+ messages in thread
From: Demi Marie Obenour @ 2026-03-03 17:56 UTC (permalink / raw)
  To: Alyssa Ross, Bo Chen, Rob Bradford, Wei Liu, Sebastien Boeuf,
	qemu-devel, dev, Spectrum OS Development, Michael S. Tsirkin,
	Stefano Garzarella, Alex Bennée, Manos Pitsidianakis,
	Marc-André Lureau, virtio-comment


[-- Attachment #1.1.1: Type: text/plain, Size: 2133 bytes --]

Spectrum (https://spectrum-os.org) is going to be implementing
virtio devices outside of the host.  One proposed method of doing
this is virtio-vhost-user, which is a virtio device that allows a
VM to expose a vhost-user device to another VM.  For instance, one
could assign a NIC to one VM and have it provide a vhost-user-net
device for use by a different VM.

I brought this up on the KVM/QEMU community call today.  Alex Bennée
recommended using virtio-msg instead.  However, I have a few concerns
with this:

1. Virtio-msg buses are specific to a given hypervisor or (in the
   case of FF-A) to a given CPU architecture.  None of the current
   buses support KVM on platforms other than Arm64.  Therefore,
   a brand-new bus would be needed.

2. Virtio-msg requires not-yet-upstream drivers in both the frontend
   (the VM using the device) and the backend (the VM providing the
   device).  Vhost-user uses any of the existing transports, such as
   PCI or MMIO.  This means that upstream drivers can be used in the
   frontend, and also enables support for Windows and other guests
   that lack support for virtio-msg.

3. Vhost-user is already widely deployed, so frontend implementations
   are quite well tested.  A KVM-specific virtio-msg transport would
   serve only one purpose: driver VMs (with assigned devices) on
   non-Arm64 platforms.  This is quite a niche use-case.  Therefore,
   I'm concerned that the needed frontend code will be poorly tested
   and will bitrot.

Manos Pitsidianakis stated that vhost-user does not make sense in
this case.  Why is that?  Would it make sense to use virtio-msg
between VMM and its VM, and expose a vhost-user device to the
outside world?  What about having the virtio-vhost-user guest driver
emulate a virtio-msg transport, so that it can be used with any device
implementation supporting virtio-msg?

I would greatly appreciate any and all suggestions here.  This is a
serious project that is going to be used in production, but I want
to ensure that the design is the best possible.
-- 
Sincerely,
Demi Marie Obenour (she/her/hers)

[-- Attachment #1.1.2: OpenPGP public key --]
[-- Type: application/pgp-keys, Size: 7253 bytes --]

[-- Attachment #2: OpenPGP digital signature --]
[-- Type: application/pgp-signature, Size: 833 bytes --]

^ permalink raw reply	[flat|nested] 5+ messages in thread

* Re: virtio-msg inter-VM transport vs virtio-vhost-user
  2026-03-03 17:56 virtio-msg inter-VM transport vs virtio-vhost-user Demi Marie Obenour
@ 2026-03-04  8:26 ` Bertrand Marquis
  2026-03-04  9:18   ` Demi Marie Obenour
  2026-03-04 16:01 ` Edgar E. Iglesias
  1 sibling, 1 reply; 5+ messages in thread
From: Bertrand Marquis @ 2026-03-04  8:26 UTC (permalink / raw)
  To: Demi Marie Obenour
  Cc: Alyssa Ross, Bo Chen, Rob Bradford, Wei Liu, Sebastien Boeuf,
	qemu-devel@nongnu.org, dev@lists.cloudhypervisor.org,
	Spectrum OS Development, Michael S. Tsirkin, Stefano Garzarella,
	Alex Bennée, Manos Pitsidianakis, Marc-André Lureau,
	virtio-comment@lists.linux.dev

Hi Demi,

> On 3 Mar 2026, at 18:56, Demi Marie Obenour <demiobenour@gmail.com> wrote:
>
> Spectrum (https://spectrum-os.org) is going to be implementing
> virtio devices outside of the host.  One proposed method of doing
> this is virtio-vhost-user, which is a virtio device that allows a
> VM to expose a vhost-user device to another VM.  For instance, one
> could assign a NIC to one VM and have it provide a vhost-user-net
> device for use by a different VM.
>
> I brought this up on the KVM/QEMU community call today.  Alex Bennée
> recommended using virtio-msg instead.  However, I have a few concerns
> with this:
>
> 1. Virtio-msg buses are specific to a given hypervisor or (in the
>   case of FF-A) to a given CPU architecture.  None of the current
>   buses support KVM on platforms other than Arm64.  Therefore,
>   a brand-new bus would be needed.

Even FF-A is not usable at the moment with KVM, as there is no FF-A
support for VM-to-VM communication in KVM (only the host can talk to
the secure world).

MMIO- or PCI-based virtio works in the KVM case and was not really
the target of our work.

pKVM is a target and is being worked on using FF-A, but pVM-to-host
virtio still uses PCI at the moment.

Now, creating a KVM-specific bus that reuses the concept of a FIFO
to transfer messages between a VM and the host is definitely possible
and should not be that complex.

Right now I am working on a backend implementation where:
- the bus implementation lives in the Linux kernel, allowing several
implementations (FF-A, Xen, or others) to be done as Linux drivers;
- a bus interface is provided to userspace, so that QEMU could contain
the transport implementation but would not need to be modified for new
bus implementations.

Reusing this, it should be fairly simple to define a KVM bus and reuse
the other parts of the implementations.

>
> 2. Virtio-msg requires not-yet-upstream drivers in both the frontend
>   (the VM using the device) and the backend (the VM providing the
>   device).  Vhost-user uses any of the existing transports, such as
>   PCI or MMIO.  This means that upstream drivers can be used in the
>   frontend, and also enables supports for Windows and other guests
>   that lack support for virtio-msg.

This is definitely true and will always stay true. To use virtio-msg,
you will need a new transport implementation plus whatever bus
implementation(s) you require. To be used on Windows, those parts will
also be needed.

In your example here you rely on the existing MMIO or PCI transport and
existing vhost implementations. This is definitely quicker to do and use.
The goal is not to replace what works, but to provide solutions for use
cases where MMIO or PCI currently do not work or need to be optimized.

>
> 3. Vhost-user is already widely deployed, so frontend implementations
>   are quite well tested.  A KVM-specific virtio-msg transport would
>   serve only one purpose: driver VMs (with assigned devices) on
>   non-Arm64 platforms.  This is a quite niche use-case.  Therefore,
>   I'm concerned that the needed frontend code will be poorly tested
>   and bitrot.

We are in the process of defining the specification for virtio-msg, and
we are working on implementations in parallel, so it is clear that our
implementations are not widely tested for now.
That said, a KVM-specific virtio-msg bus implementation would reuse
the transport and driver implementations, which would be used on any
platform in the future. So I am not following your niche use-case and
poorly-tested arguments here. Maybe I am missing something.

>
> Manos Pitsidianakis stated that vhost-user does not make sense in
> this case.  Why is that?  Would it make sense to use virtio-msg
> between VMM and its VM, and expose a vhost-user device to the
> outside world?  What about having the virtio-vhost-user guest driver
> emulate a virtio-msg transport, so that it can be used with any device
> implementation supporting virtio-msg?

I am not following your point here. You want to do virtio on top of virtio?

Regards
Bertrand

>
> I would greatly appreciate any and all suggestions here.  This is a
> serious project that is going to be used in production, but I want
> to ensure that the design is the best possible.
> --
> Sincerely,
> Demi Marie Obenour (she/her/hers)
> <OpenPGP_0xB288B55FFF9C22C1.asc>

IMPORTANT NOTICE: The contents of this email and any attachments are confidential and may also be privileged. If you are not the intended recipient, please notify the sender immediately and do not disclose the contents to any other person, use it for any purpose, or store or copy the information in any medium. Thank you.

^ permalink raw reply	[flat|nested] 5+ messages in thread

* Re: virtio-msg inter-VM transport vs virtio-vhost-user
  2026-03-04  8:26 ` Bertrand Marquis
@ 2026-03-04  9:18   ` Demi Marie Obenour
  2026-03-04 13:36     ` Bertrand Marquis
  0 siblings, 1 reply; 5+ messages in thread
From: Demi Marie Obenour @ 2026-03-04  9:18 UTC (permalink / raw)
  To: Bertrand Marquis
  Cc: Alyssa Ross, Bo Chen, Rob Bradford, Wei Liu, Sebastien Boeuf,
	qemu-devel@nongnu.org, dev@lists.cloudhypervisor.org,
	Spectrum OS Development, Michael S. Tsirkin, Stefano Garzarella,
	Alex Bennée, Manos Pitsidianakis, Marc-André Lureau,
	virtio-comment@lists.linux.dev


[-- Attachment #1.1.1: Type: text/plain, Size: 7515 bytes --]

On 3/4/26 03:26, Bertrand Marquis wrote:
> Hi Demi,
> 
>> On 3 Mar 2026, at 18:56, Demi Marie Obenour <demiobenour@gmail.com> wrote:
>>
>> Spectrum (https://spectrum-os.org) is going to be implementing
>> virtio devices outside of the host.  One proposed method of doing
>> this is virtio-vhost-user, which is a virtio device that allows a
>> VM to expose a vhost-user device to another VM.  For instance, one
>> could assign a NIC to one VM and have it provide a vhost-user-net
>> device for use by a different VM.
>>
>> I brought this up on the KVM/QEMU community call today.  Alex Bennée
>> recommended using virtio-msg instead.  However, I have a few concerns
>> with this:
>>
>> 1. Virtio-msg buses are specific to a given hypervisor or (in the
>>   case of FF-A) to a given CPU architecture.  None of the current
>>   buses support KVM on platforms other than Arm64.  Therefore,
>>   a brand-new bus would be needed.
> 
> Even FF-A is not useable at the moment with KVM as there is no FF-A
> support for VM to VM in KVM (only host can communicate with the secure
> world).

Ah, I thought that pKVM had pVM <=> pVM communication implemented.

> MMIO or PCI based virtio in KVM case is working and was not really
> the target of our work.
> 
> pKVM is a target and is being worked on using FF-A but pVM to Host
> virtio is still using PCI at the moment

Makes sense.  Does that involve any confidential computing-specific
code in the PCI subsystem, or has the subsystem as a whole been
hardened against malicious devices?  That matters for some of my
use-cases, where a device may be present but not authorized for use:
it's supposed to be passed through to a VM instead.

> Now creating a KVM specific bus reusing the concept of a FIFO
> to transfer the messages between a VM and Host is definitely possible
> to do and should not be that complex.

Can you give a rough estimate of how much code you are referring to?

> Right now i am working on backend implementation where:
> - the bus implementation would be in linux kernel allowing several
> implementations like FF-A, xen or others to be done as linux drivers.

What is the advantage of having the bus implementation in the kernel
as opposed to userspace?  Is it because the bus implementation is
responsible for protecting the kernel from malicious userspace?

In case you can't tell already, I'm a fan of microkernels :).

> - have a bus interface provided to user land so that Qemu could contain
> the transport implementation but would not need to be modified for new
> bus implementations.
> 
> Reusing this, it should be fairly simple to define a KVM bus and reuse
> the other parts of the implementations.

How long do you think it will be before this could be included in
upstream, mainline Linux?  I imagine this would need to wait for your
existing virtio-msg work to be upstreamed, and then I would need to
upstream a separate driver and spec.

>> 2. Virtio-msg requires not-yet-upstream drivers in both the frontend
>>   (the VM using the device) and the backend (the VM providing the
>>   device).  Vhost-user uses any of the existing transports, such as
>>   PCI or MMIO.  This means that upstream drivers can be used in the
>>   frontend, and also enables supports for Windows and other guests
>>   that lack support for virtio-msg.
> 
> This is definitely true and will always stay true. To use virtio-msg you
> will need a new transport implementation for it when you want to use
> it and the bus implementation(s) you require. To be used in windows
> those part will also be needed.
> 
> In your example here you rely on existing MMIO or PCI transport and
> existing vhost implementations. This is definitely quicker to do and use.
> The goal is not replace what works but to provide solutions for use cases
> where MMIO or PCI currently do not work or need to be optimized.

My use-case doesn't have the same restrictions, as long as config
space access is rare and denial of service is not a concern.

>> 3. Vhost-user is already widely deployed, so frontend implementations
>>   are quite well tested.  A KVM-specific virtio-msg transport would
>>   serve only one purpose: driver VMs (with assigned devices) on
>>   non-Arm64 platforms.  This is a quite niche use-case.  Therefore,
>>   I'm concerned that the needed frontend code will be poorly tested
>>   and bitrot.
> 
> We are in the process of defining the specification for virtio-msg and we
> are working on implementations in parallel so our implementations are
> for now not widely tested that is clear.
> Now a specific KVM virtio message bus implementation would reuse
> the transport and driver implementations which would be used on any
> platforms in the future. I am not following your niche use-case here
> and the poorly tested argument. Maybe i am missing something.
> 
>>
>> Manos Pitsidianakis stated that vhost-user does not make sense in
>> this case.  Why is that?  Would it make sense to use virtio-msg
>> between VMM and its VM, and expose a vhost-user device to the
>> outside world?  What about having the virtio-vhost-user guest driver
>> emulate a virtio-msg transport, so that it can be used with any device
>> implementation supporting virtio-msg?
> 
> I am not following your point here. You want to do virtio on top of virtio ?

Yes, actually!  More specifically, I want to use a virtio device in
one VM to implement a virtio device for a different VM.

To avoid confusion, and to match Xen terminology, I'll call the
userspace VMM that acts as a vhost-user server the *backend* VMM.
The *frontend* VMM is the one that acts as a vhost-user client, and is
completely unaware that anything special is happening in the backend.

A virtio vhost-user device is a virtio device that acts as a vhost-user
server.  For each vhost-user device it wants its guest to implement,
a VMM listens on an AF_UNIX socket, just like any other userspace
process would.  It then creates a virtio device in this guest, with
type *vhost-user device backend*.  This device has a single virtqueue,
and vhost-user messages are forwarded between that virtqueue and the
AF_UNIX socket.

The backend VMM needs to handle ancillary data specially.
For instance, memory that it mmaps is placed in a PCI BAR that is
accessible to the guest, much like a virtio-GPU blob object.  Vring
kick file descriptors are registered with the kernel as irqfds, so
when the frontend signals them, the backend VM receives an interrupt.
Vring call, error, and log file descriptors are registered with the
kernel as ioeventfds, so that the backend VM can trigger them by
MMIO writes.  It also registers and unregisters irqfds in response
to interrupts being unmasked and masked.

After everything has been set up, the backend VMM is *not* involved in
performance-critical operations.  The vhost-user shared memory regions
are mapped into its address space, so it can access them just like any
other memory.  Writing to an ioeventfd in the frontend VM causes KVM to
trigger an interrupt in the backend VM and vice versa.  I expect that
performance should be almost as good as a vhost-user device implemented
in userspace, but with the full isolation guarantees of a VM.

https://stefanha.github.io/virtio/vhost-user-slave.html has the spec
I am currently using.  It does a much better job explaining things.
-- 
Sincerely,
Demi Marie Obenour (she/her/hers)

[-- Attachment #1.1.2: OpenPGP public key --]
[-- Type: application/pgp-keys, Size: 7253 bytes --]

[-- Attachment #2: OpenPGP digital signature --]
[-- Type: application/pgp-signature, Size: 833 bytes --]

^ permalink raw reply	[flat|nested] 5+ messages in thread

* Re: virtio-msg inter-VM transport vs virtio-vhost-user
  2026-03-04  9:18   ` Demi Marie Obenour
@ 2026-03-04 13:36     ` Bertrand Marquis
  0 siblings, 0 replies; 5+ messages in thread
From: Bertrand Marquis @ 2026-03-04 13:36 UTC (permalink / raw)
  To: Demi Marie Obenour
  Cc: Alyssa Ross, Bo Chen, Rob Bradford, Wei Liu, Sebastien Boeuf,
	qemu-devel@nongnu.org, dev@lists.cloudhypervisor.org,
	Spectrum OS Development, Michael S. Tsirkin, Stefano Garzarella,
	Alex Bennée, Manos Pitsidianakis, Marc-André Lureau,
	virtio-comment@lists.linux.dev

Hi Demi,

> On 4 Mar 2026, at 10:18, Demi Marie Obenour <demiobenour@gmail.com> wrote:
>
> On 3/4/26 03:26, Bertrand Marquis wrote:
>> Hi Demi,
>>
>>> On 3 Mar 2026, at 18:56, Demi Marie Obenour <demiobenour@gmail.com> wrote:
>>>
>>> Spectrum (https://spectrum-os.org) is going to be implementing
>>> virtio devices outside of the host.  One proposed method of doing
>>> this is virtio-vhost-user, which is a virtio device that allows a
>>> VM to expose a vhost-user device to another VM.  For instance, one
>>> could assign a NIC to one VM and have it provide a vhost-user-net
>>> device for use by a different VM.
>>>
>>> I brought this up on the KVM/QEMU community call today.  Alex Bennée
>>> recommended using virtio-msg instead.  However, I have a few concerns
>>> with this:
>>>
>>> 1. Virtio-msg buses are specific to a given hypervisor or (in the
>>>  case of FF-A) to a given CPU architecture.  None of the current
>>>  buses support KVM on platforms other than Arm64.  Therefore,
>>>  a brand-new bus would be needed.
>>
>> Even FF-A is not useable at the moment with KVM as there is no FF-A
>> support for VM to VM in KVM (only host can communicate with the secure
>> world).
>
> Ah, I thought that pKVM had pVM <=> pVM communication implemented.

pKVM is different from KVM.

Google is working on a pKVM implementation, and their primary use cases are:
- Host to secure world
- pVM to secure world

pVM-to-host or pVM-to-pVM is not using virtio-msg for now.

>
>> MMIO or PCI based virtio in KVM case is working and was not really
>> the target of our work.
>>
>> pKVM is a target and is being worked on using FF-A but pVM to Host
>> virtio is still using PCI at the moment
>
> Makes sense.  Does that involve any confidential computing-specific
> code in the PCI subsystem, or has the subsystem as a whole been
> hardened against malicious devices?  That matters for some of my
> use-cases, where a device may be present but not authorized for use.
> It's supposed to be passed through to a VM.

That question would have to be answered by Google, but I don't think
any confidential compute is involved for now.

>
>> Now creating a KVM specific bus reusing the concept of a FIFO
>> to transfer the messages between a VM and Host is definitely possible
>> to do and should not be that complex.
>
> Can you give a rough estimate of how much code you are referring to?

If you can reuse the FIFO implementation from FF-A, that would mean
implementing a discovery system, FIFO setup, and some way to share
memory (which should already exist), so something around 500 to 1000
lines of code, I would say.

The current FF-A bus implementation I am working on is a lot more
complex (it supports FIFOs but also direct and indirect messages, plus
the FF-A way to share memory) and is around 3000 lines of code.

>
>> Right now i am working on backend implementation where:
>> - the bus implementation would be in linux kernel allowing several
>> implementations like FF-A, xen or others to be done as linux drivers.
>
> What is the advantage of having the bus implementation in the kernel
> as opposed to userspace?  Is it because the bus implementation is
> responsible for protecting the kernel from malicious userspace?

The main reason is to handle memory sharing and mapping, as well as
having an easy way to transfer messages through any hardware means.

Such a design also keeps the QEMU implementation unchanged: to
implement a new bus, you just load a new kernel module.

>
> In case you can't tell already, I'm a fan of microkernels :).

I am a Xen maintainer, and I did a PoC of virtio-msg using a baremetal
solution to experiment, so the design is not bound to Linux :-)
Google also did an implementation in Rust in Trusty.

>
>> - have a bus interface provided to user land so that Qemu could contain
>> the transport implementation but would not need to be modified for new
>> bus implementations.
>>
>> Reusing this, it should be fairly simple to define a KVM bus and reuse
>> the other parts of the implementations.
>
> How long do you think it will be before this could be included in
> upstream, mainline Linux?  I imagine this would need to wait for your
> existing virtio-msg work to be upstreamed, and then I would need to
> upstream a separate driver and spec.

We are working on it.
A first RFC was published, and we will continue to work on a more
complete solution (I am working on that right now).

We have several PoCs that you can find on the HVAC group page mentioned
in the cover letter.

Regarding a fully working VM-to-VM example with QEMU, I would say that
in a month or two I will be able to share something.

>
>>> 2. Virtio-msg requires not-yet-upstream drivers in both the frontend
>>>  (the VM using the device) and the backend (the VM providing the
>>>  device).  Vhost-user uses any of the existing transports, such as
>>>  PCI or MMIO.  This means that upstream drivers can be used in the
>>>  frontend, and also enables supports for Windows and other guests
>>>  that lack support for virtio-msg.
>>
>> This is definitely true and will always stay true. To use virtio-msg you
>> will need a new transport implementation for it when you want to use
>> it and the bus implementation(s) you require. To be used in windows
>> those part will also be needed.
>>
>> In your example here you rely on existing MMIO or PCI transport and
>> existing vhost implementations. This is definitely quicker to do and use.
>> The goal is not replace what works but to provide solutions for use cases
>> where MMIO or PCI currently do not work or need to be optimized.
>
> My use-case doesn't have the same restrictions, as long as config
> space access is rare and denial of service is not a concern.

Then the existing MMIO/PCI is definitely OK. You could switch in the
future to virtio-msg to enhance asynchronous handling without impacting
the rest of your stack.

>
>>> 3. Vhost-user is already widely deployed, so frontend implementations
>>>  are quite well tested.  A KVM-specific virtio-msg transport would
>>>  serve only one purpose: driver VMs (with assigned devices) on
>>>  non-Arm64 platforms.  This is a quite niche use-case.  Therefore,
>>>  I'm concerned that the needed frontend code will be poorly tested
>>>  and bitrot.
>>
>> We are in the process of defining the specification for virtio-msg and we
>> are working on implementations in parallel so our implementations are
>> for now not widely tested that is clear.
>> Now a specific KVM virtio message bus implementation would reuse
>> the transport and driver implementations which would be used on any
>> platforms in the future. I am not following your niche use-case here
>> and the poorly tested argument. Maybe i am missing something.
>>
>>>
>>> Manos Pitsidianakis stated that vhost-user does not make sense in
>>> this case.  Why is that?  Would it make sense to use virtio-msg
>>> between VMM and its VM, and expose a vhost-user device to the
>>> outside world?  What about having the virtio-vhost-user guest driver
>>> emulate a virtio-msg transport, so that it can be used with any device
>>> implementation supporting virtio-msg?
>>
>> I am not following your point here. You want to do virtio on top of virtio ?
>
> Yes, actually!  More specifically, I want to use a virtio device in
> one VM to implement a virtio device for a different VM.
>
> To avoid confusion, and to match Xen terminology, I'll call the
> userspace VMM that acts as a vhost-user server the *backend* VMM.
> The *frontend* VMM is the one that acts as a vhost-user client, and is
> completely unaware that anything special is happening in the backend.
>
> A virtio vhost-user device is a virtio device that acts as a vhost-user
> server.  For each vhost-user device it wants its guest to implement,
> a VMM listens on a AF_UNIX socket, just like any other userspace
> process would.  It then creates a virtio device in this guest, with
> type *vhost-user device backend*.  This device has a single virtqueue,
> and vhost-user messages are forwarded between that virtqueue and the
> AF_UNIX socket.
>
> The backend VMM needs to handle ancillary data specially.
> For instance, memory that it mmap's is placed in a PCI BAR that is
> accessible to the guest, much like a virtio-GPU blob object.  Vring
> kick file descriptors are registered with the kernel as irqfds, so
> when the frontend signals them, the backend VM receives an interrupt.
> Vring call, error, and log file descriptors are registered with the
> kernel as ioeventfds, so that the backend VM can trigger them by
> MMIO writes.  It also registers and unregisters irqfds in response
> to interrupts being unmasked and masked.
>
> After everything has been set up, the backend VMM is *not* involved in
> performance-critical operations.  The vhost-user shared memory regions
> are mapped into its address space, so it can access them just like any
> other memory.  Writing to an ioeventfd in the frontend VM causes KVM to
> trigger an interrupt in the backend VM and visa versa.  I expect that
> performance should be almost as good as a vhost-user device implemented
> in userspace, but with the full isolation guarantees of a VM.

So basically you want a way to implement device emulation in
independent applications (instead of having everything inside the VMM).

The current layering I am working on would not allow this easily, as
the VMM in my case handles the transport side of things, so it holds
the dispatch between devices.
To make that possible without a VMM, we would need to have the dispatch
done in the kernel and then allow such use cases directly.
This is not impossible, but it does not correspond to what we are
working on.

Cheers
Bertrand

>
> https://stefanha.github.io/virtio/vhost-user-slave.html has the spec
> I am currently using.  It does a much better job explaining things.
> --
> Sincerely,
> Demi Marie Obenour (she/her/hers)<OpenPGP_0xB288B55FFF9C22C1.asc>



^ permalink raw reply	[flat|nested] 5+ messages in thread

* Re: virtio-msg inter-VM transport vs virtio-vhost-user
  2026-03-03 17:56 virtio-msg inter-VM transport vs virtio-vhost-user Demi Marie Obenour
  2026-03-04  8:26 ` Bertrand Marquis
@ 2026-03-04 16:01 ` Edgar E. Iglesias
  1 sibling, 0 replies; 5+ messages in thread
From: Edgar E. Iglesias @ 2026-03-04 16:01 UTC (permalink / raw)
  To: Demi Marie Obenour
  Cc: Alyssa Ross, Bo Chen, Rob Bradford, Wei Liu, Sebastien Boeuf,
	qemu-devel, dev, Spectrum OS Development, Michael S. Tsirkin,
	Stefano Garzarella, Alex Bennée, Manos Pitsidianakis,
	Marc-André Lureau, virtio-comment

On Tue, Mar 03, 2026 at 12:56:46PM -0500, Demi Marie Obenour wrote:
> Spectrum (https://spectrum-os.org) is going to be implementing
> virtio devices outside of the host.  One proposed method of doing
> this is virtio-vhost-user, which is a virtio device that allows a
> VM to expose a vhost-user device to another VM.  For instance, one
> could assign a NIC to one VM and have it provide a vhost-user-net
> device for use by a different VM.
> 
> I brought this up on the KVM/QEMU community call today.  Alex Bennée
> recommended using virtio-msg instead.  However, I have a few concerns
> with this:
> 
> 1. Virtio-msg buses are specific to a given hypervisor or (in the
>    case of FF-A) to a given CPU architecture.  None of the current
>    buses support KVM on platforms other than Arm64.  Therefore,
>    a brand-new bus would be needed.

Hi Demi,

The virtio-msg AMP PCI bus works on KVM for Arm, x86, and any other
arch; it's generic. These are the patches we posted to qemu-devel:

https://lore.kernel.org/qemu-devel/20260224155721.612314-1-edgar.iglesias@gmail.com/


> 
> 2. Virtio-msg requires not-yet-upstream drivers in both the frontend
>    (the VM using the device) and the backend (the VM providing the
>    device).  Vhost-user uses any of the existing transports, such as
>    PCI or MMIO.  This means that upstream drivers can be used in the
>    frontend, and also enables supports for Windows and other guests
>    that lack support for virtio-msg.

Fair point, a bit of chicken and egg...

> 
> 3. Vhost-user is already widely deployed, so frontend implementations
>    are quite well tested.  A KVM-specific virtio-msg transport would
>    serve only one purpose: driver VMs (with assigned devices) on
>    non-Arm64 platforms.  This is a quite niche use-case.  Therefore,
>    I'm concerned that the needed frontend code will be poorly tested
>    and bitrot.
> 
> Manos Pitsidianakis stated that vhost-user does not make sense in
> this case.  Why is that?  Would it make sense to use virtio-msg
> between VMM and its VM, and expose a vhost-user device to the
> outside world?  What about having the virtio-vhost-user guest driver
> emulate a virtio-msg transport, so that it can be used with any device
> implementation supporting virtio-msg?
> 
> I would greatly appreciate any and all suggestions here.  This is a
> serious project that is going to be used in production, but I want
> to ensure that the design is the best possible.

I've not looked in detail at virtio-vhost-user but it seems to be
a bit similar to virtio-msg over PCI AMP with some differences.

My take is:
vhost-user is an interface designed for splitting the virtio transport
from the virtio device backend in userspace on the same OS instance.
The virtio-vhost-user device enables the use of vhost-user across
VMs, stretching vhost-user's intended use. This is a great step, but
since vhost-user is not a virtio transport, it is not end-to-end: you
need QEMU to translate virtio-pci to vhost-user and tunnel it over the
virtio-vhost-user device to the backend.

Virtio-msg is from the start a transport meant to work between
heterogeneous systems: different OSes, different OS instances, even
different SoCs. It's end-to-end in the sense that if you have a
front-end driver in the front-end VM with an appropriate virtio-msg
bus, you can talk directly to the backend without intermediate proxies,
potentially without VM exits.

Cheers,
Edgar

^ permalink raw reply	[flat|nested] 5+ messages in thread

end of thread, other threads:[~2026-03-04 16:01 UTC | newest]

Thread overview: 5+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2026-03-03 17:56 virtio-msg inter-VM transport vs virtio-vhost-user Demi Marie Obenour
2026-03-04  8:26 ` Bertrand Marquis
2026-03-04  9:18   ` Demi Marie Obenour
2026-03-04 13:36     ` Bertrand Marquis
2026-03-04 16:01 ` Edgar E. Iglesias

Code repositories for project(s) associated with this public inbox

	https://spectrum-os.org/git/crosvm
	https://spectrum-os.org/git/doc
	https://spectrum-os.org/git/mktuntap
	https://spectrum-os.org/git/nixpkgs
	https://spectrum-os.org/git/spectrum
	https://spectrum-os.org/git/ucspi-vsock
	https://spectrum-os.org/git/www
