public inbox for cygwin-developers@cygwin.com
* The unreliability of AF_UNIX datagram sockets
@ 2021-04-27 15:47 Ken Brown
  2021-04-29 11:05 ` Corinna Vinschen
  0 siblings, 1 reply; 26+ messages in thread
From: Ken Brown @ 2021-04-27 15:47 UTC (permalink / raw)
  To: cygwin-devel

[-- Attachment #1: Type: text/plain, Size: 1754 bytes --]

This is a follow-up to

  https://cygwin.com/pipermail/cygwin/2021-April/248383.html

I'm attaching a test case slightly simpler than the one posted by the OP in that 
thread.  This is a client/server scenario, with non-blocking AF_UNIX datagram 
sockets.  The client writes COUNT messages while the server is playing with his 
toes.  Then the server reads the messages.

If COUNT is too big, the expectation is that the client's sendto call will 
eventually return EAGAIN.  This is what happens on Linux.  On Cygwin, however, 
there is never a sendto error; the program ends when recv fails with EAGAIN, 
indicating that some messages were dropped.

I think what's happening is that WSASendTo is silently dropping messages without 
returning an error.  I guess this is acceptable because of the documented 
unreliability of AF_INET datagram sockets.  But AF_UNIX datagram sockets are 
supposed to be reliable.

I can't think of anything that Cygwin can do about this (but I would love to be 
proven wrong).  My real reason for raising the issue is that, as we recently 
discussed in a different thread, maybe it's time for Cygwin to start using 
native Windows AF_UNIX sockets.  But then we would still have to come up with 
our own implementation of AF_UNIX datagram sockets, and it seems that we can't 
simply use the current implementation.  AFAICT, Mark's suggestion of using 
message queues is the best idea so far.

I'm willing to start working on the switch to native AF_UNIX sockets.  (I'm 
frankly getting bored with working on the pipe implementation, and this doesn't 
really seem like it has much of a future.)  But I'd like to be confident that 
there's a good solution to the datagram problem before I invest too much time in 
this.

Ken


[-- Attachment #2: dgram_loss.c --]
[-- Type: text/plain, Size: 1764 bytes --]

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <errno.h>
#include <unistd.h>

#define SOCK_PATH "/tmp/mysocket"

int sfd;

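/* Create the server's non-blocking datagram socket and bind it to
   SOCK_PATH.  The descriptor is stored in the global sfd so that main
   can read from it later.  */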
int
server ()
{
  struct sockaddr_un un;

  if (unlink (SOCK_PATH) < 0 && errno != ENOENT)
    {
      printf ("unlink: %d <%s>\n", errno, strerror (errno));
      return -1;
    }
  sfd = socket (AF_UNIX, SOCK_DGRAM | SOCK_NONBLOCK, 0);
  if (sfd < 0)
    {
      printf ("SRV socket: %d <%s>\n", errno, strerror (errno));
      return -1;
    }
  memset (&un, 0, sizeof un);
  un.sun_family = AF_UNIX;
  strcpy (un.sun_path, SOCK_PATH);
  if (bind (sfd, (const struct sockaddr *) &un, sizeof un) < 0)
    {
      printf ("SRV bind: %d <%s>\n", errno, strerror (errno));
      return -1;
    }
  return 0;
}

int
main ()
{
  int fd;
  struct sockaddr_un un;

  fd = socket (AF_UNIX, SOCK_DGRAM | SOCK_NONBLOCK, 0);
  if (fd < 0)
    {
      printf ("socket: %d <%s>\n", errno, strerror (errno));
      return 1;
    }

  if (server ())
    return 2;

  memset (&un, 0, sizeof un);
  un.sun_family = AF_UNIX;
  strcpy (un.sun_path, SOCK_PATH);

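/* Send more datagrams than the socket buffers can hold.  On Linux,
   sendto eventually fails with EAGAIN; on Cygwin it never does, and
   messages are silently dropped.  */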
#define COUNT (64 * 1024)
  for (int i = 0; i < COUNT; i++)
    {
      if (sendto (fd, &i, sizeof i, 0, (struct sockaddr *) &un, sizeof un)
	  != sizeof i)
	{
	  printf ("sendto: %d <%s>, i = %d\n", errno, strerror (errno), i);
	  return 3;
	}
    }
  for (int i = 0; i < COUNT; i++)
    {
      int j = -1;
      ssize_t nr = recv (sfd, &j, sizeof j, 0);

      if (nr < 0)
	{
	  printf ("recv: %d <%s>, i = %d\n", errno, strerror (errno), i);
	  return 4;
	}
      if (nr != sizeof j)
	{
	  printf ("partial read, i = %d\n", i);
	  return 5;
	}
      if (i != j)
	printf ("i = %d, j = %d\n", i, j);
    }
  return 0;
}

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: The unreliability of AF_UNIX datagram sockets
  2021-04-27 15:47 The unreliability of AF_UNIX datagram sockets Ken Brown
@ 2021-04-29 11:05 ` Corinna Vinschen
  2021-04-29 11:16   ` Corinna Vinschen
                     ` (2 more replies)
  0 siblings, 3 replies; 26+ messages in thread
From: Corinna Vinschen @ 2021-04-29 11:05 UTC (permalink / raw)
  To: cygwin-developers

On Apr 27 11:47, Ken Brown wrote:
> This is a follow-up to
> 
>  https://cygwin.com/pipermail/cygwin/2021-April/248383.html
> 
> I'm attaching a test case slightly simpler than the one posted by the OP in
> that thread.  This is a client/server scenario, with non-blocking AF_UNIX
> datagram sockets.  The client writes COUNT messages while the server is
> playing with his toes.  Then the server reads the messages.
> 
> If COUNT is too big, the expectation is that the client's sendto call will
> eventually return EAGAIN.  This is what happens on Linux.  On Cygwin,
> however, there is never a sendto error; the program ends when recv fails
> with EAGAIN, indicating that some messages were dropped.
> 
> I think what's happening is that WSASendTo is silently dropping messages
> without returning an error.  I guess this is acceptable because of the
> documented unreliability of AF_INET datagram sockets.  But AF_UNIX datagram
> sockets are supposed to be reliable.
> 
> I can't think of anything that Cygwin can do about this (but I would love to
> be proven wrong).  My real reason for raising the issue is that, as we
> recently discussed in a different thread, maybe it's time for Cygwin to
> start using native Windows AF_UNIX sockets.  But then we would still have to
> come up with our own implementation of AF_UNIX datagram sockets, and it
> seems that we can't simply use the current implementation.  AFAICT, Mark's
> suggestion of using message queues is the best idea so far.
> 
> I'm willing to start working on the switch to native AF_UNIX sockets.  (I'm
> frankly getting bored with working on the pipe implementation, and this
          ^^^^^^^^^^^^^
I'm not really surprised; Windows pipe semantics are annoying.

> doesn't really seem like it has much of a future.)  But I'd like to be
> confident that there's a good solution to the datagram problem before I
> invest too much time in this.

Summary of our short discussion on IRC:

- Switching to SOCK_STREAM under the hood adds the necessary reliability
  but breaks DGRAM message boundaries.

- There appears to be no way in Winsock to handle send buffer overflow
  gracefully so that user space knows that messages have been discarded.
  Strangely enough, there's a SIO_ENABLE_CIRCULAR_QUEUEING ioctl, but that
  just makes things worse, by dropping older messages in favor of the
  newer ones :-P

I think it should be possible to switch to STREAM sockets to emulate
DGRAM semantics.  Our advantage is that this is all local.  For all
practical purposes there's no chance data gets really lost.  Windows has
an almost unlimited send buffer.

If you look at the STREAM as a kind of tunneling layer for getting DGRAM
messages over the (local) line, the DGRAM content could simply be
encapsulated in a tunnel packet or frame, basically the same way the
new, boring AF_UNIX code does it.  A DGRAM message encapsulated in a
STREAM message always has a header which at least contains the length of
the actual DGRAM message.  So when the peer reads from the socket, it
always only reads the header until it's complete.  Then it knows how
much payload is expected and then it reads until the payload has been
received.
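
Roughly like this, as a sketch (all names hypothetical, and this is not
the actual packet layout):

#include <stdint.h>
#include <sys/socket.h>

/* Hypothetical tunnel header preceding every encapsulated datagram.  */
struct dgram_hdr
{
  uint32_t len;			/* length of the DGRAM payload */
};

/* Read exactly LEN bytes from the STREAM socket.  */
static int
read_exact (int fd, void *buf, size_t len)
{
  char *p = buf;

  while (len > 0)
    {
      ssize_t n = recv (fd, p, len, 0);
      if (n <= 0)
	return -1;		/* error or EOF */
      p += n;
      len -= n;
    }
  return 0;
}

/* Receive one encapsulated datagram: header first, then payload.
   Excess payload beyond the caller's buffer is read and discarded so
   the next header stays aligned.  */
static ssize_t
recv_dgram (int fd, void *buf, size_t len)
{
  struct dgram_hdr hdr;
  size_t take, rest;

  if (read_exact (fd, &hdr, sizeof hdr) < 0)
    return -1;
  take = hdr.len < len ? hdr.len : len;
  if (read_exact (fd, buf, take) < 0)
    return -1;
  for (rest = hdr.len - take; rest > 0; )
    {
      char junk[512];
      size_t chunk = rest < sizeof junk ? rest : sizeof junk;

      if (read_exact (fd, junk, chunk) < 0)
	return -1;
      rest -= chunk;
    }
  return take;
}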

Ultimately this would even allow us to emulate DGRAMs when using native
Windows AF_UNIX sockets.  Then we'd just have to keep the old code for
backward compat.

There's just one problem with this entire switch to non-pipes: Sending
descriptors between peers running under different accounts requires the
ability to switch the user context.  You need this so that the receiver
can call ImpersonateNamedPipeClient when the sender is a non-admin
account.  So we might need to keep the pipes, if only for the purpose of
being able to call ImpersonateNamedPipeClient...


Thoughts?


Thanks,
Corinna

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: The unreliability of AF_UNIX datagram sockets
  2021-04-29 11:05 ` Corinna Vinschen
@ 2021-04-29 11:16   ` Corinna Vinschen
  2021-04-29 14:38   ` Ken Brown
  2021-05-20 13:46   ` Ken Brown
  2 siblings, 0 replies; 26+ messages in thread
From: Corinna Vinschen @ 2021-04-29 11:16 UTC (permalink / raw)
  To: cygwin-developers

On Apr 29 13:05, Corinna Vinschen wrote:
> On Apr 27 11:47, Ken Brown wrote:
> > This is a follow-up to
> > 
> >  https://cygwin.com/pipermail/cygwin/2021-April/248383.html
> > 
> > I'm attaching a test case slightly simpler than the one posted by the OP in
> > that thread.  This is a client/server scenario, with non-blocking AF_UNIX
> > datagram sockets.  The client writes COUNT messages while the server is
> > playing with his toes.  Then the server reads the messages.
> > 
> > If COUNT is too big, the expectation is that the client's sendto call will
> > eventually return EAGAIN.  This is what happens on Linux.  On Cygwin,
> > however, there is never a sendto error; the program ends when recv fails
> > with EAGAIN, indicating that some messages were dropped.
> > 
> > I think what's happening is that WSASendTo is silently dropping messages
> > without returning an error.  I guess this is acceptable because of the
> > documented unreliability of AF_INET datagram sockets.  But AF_UNIX datagram
> > sockets are supposed to be reliable.
> > 
> > I can't think of anything that Cygwin can do about this (but I would love to
> > be proven wrong).  My real reason for raising the issue is that, as we
> > recently discussed in a different thread, maybe it's time for Cygwin to
> > start using native Windows AF_UNIX sockets.  But then we would still have to
> > come up with our own implementation of AF_UNIX datagram sockets, and it
> > seems that we can't simply use the current implementation.  AFAICT, Mark's
> > suggestion of using message queues is the best idea so far.
> > 
> > I'm willing to start working on the switch to native AF_UNIX sockets.  (I'm
> > frankly getting bored with working on the pipe implementation, and this
>           ^^^^^^^^^^^^^
> I'm not really surprised; Windows pipe semantics are annoying.
> 
> > doesn't really seem like it has much of a future.)  But I'd like to be
> > confident that there's a good solution to the datagram problem before I
> > invest too much time in this.
> 
> Summary of our short discussion on IRC:
> 
> - Switching to SOCK_STREAM under the hood adds the necessary reliability
>   but breaks DGRAM message boundaries.
> 
> - There appears to be no way in Winsock to handle send buffer overflow
>   gracefully so that user space knows that messages have been discarded.
>   Strangely enough, there's a SIO_ENABLE_CIRCULAR_QUEUEING ioctl, but that
>   just makes things worse, by dropping older messages in favor of the
>   newer ones :-P
> 
> I think it should be possible to switch to STREAM sockets to emulate
> DGRAM semantics.  Our advantage is that this is all local.  For all
> practical purposes there's no chance data gets really lost.  Windows has
> an almost unlimited send buffer.
> 
> If you look at the STREAM as a kind of tunneling layer for getting DGRAM
> messages over the (local) line, the DGRAM content could simply be
> encapsulated in a tunnel packet or frame, basically the same way the
> new, boring AF_UNIX code does it.  A DGRAM message encapsulated in a
> STREAM message always has a header which at least contains the length of
> the actual DGRAM message.  So when the peer reads from the socket, it
> always only reads the header until it's complete.  Then it knows how
> much payload is expected and then it reads until the payload has been
> received.

Oh, btw., given that this is all local, and given that we always send a
defined packet length, with DGRAM max reliable packet size much smaller
than max STREAM packet size, there's almost no chance that the peer gets
an incomplete packet.  Unless, of course, user space requested a smaller
packet size than the sender sent.  In that case the remainder of the
packet is lost, but that's business as usual (MSG_TRUNC).  Obviously the
recv call has to read the entire packet and just discard the rest.


Corinna

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: The unreliability of AF_UNIX datagram sockets
  2021-04-29 11:05 ` Corinna Vinschen
  2021-04-29 11:16   ` Corinna Vinschen
@ 2021-04-29 14:38   ` Ken Brown
  2021-04-29 15:05     ` Corinna Vinschen
  2021-05-20 13:46   ` Ken Brown
  2 siblings, 1 reply; 26+ messages in thread
From: Ken Brown @ 2021-04-29 14:38 UTC (permalink / raw)
  To: cygwin-developers

On 4/29/2021 7:05 AM, Corinna Vinschen wrote:
> On Apr 27 11:47, Ken Brown wrote:
>> This is a follow-up to
>>
>>   https://cygwin.com/pipermail/cygwin/2021-April/248383.html
>>
>> I'm attaching a test case slightly simpler than the one posted by the OP in
>> that thread.  This is a client/server scenario, with non-blocking AF_UNIX
>> datagram sockets.  The client writes COUNT messages while the server is
>> playing with his toes.  Then the server reads the messages.
>>
>> If COUNT is too big, the expectation is that the client's sendto call will
>> eventually return EAGAIN.  This is what happens on Linux.  On Cygwin,
>> however, there is never a sendto error; the program ends when recv fails
>> with EAGAIN, indicating that some messages were dropped.
>>
>> I think what's happening is that WSASendTo is silently dropping messages
>> without returning an error.  I guess this is acceptable because of the
>> documented unreliability of AF_INET datagram sockets.  But AF_UNIX datagram
>> sockets are supposed to be reliable.
>>
>> I can't think of anything that Cygwin can do about this (but I would love to
>> be proven wrong).  My real reason for raising the issue is that, as we
>> recently discussed in a different thread, maybe it's time for Cygwin to
>> start using native Windows AF_UNIX sockets.  But then we would still have to
>> come up with our own implementation of AF_UNIX datagram sockets, and it
>> seems that we can't simply use the current implementation.  AFAICT, Mark's
>> suggestion of using message queues is the best idea so far.
>>
>> I'm willing to start working on the switch to native AF_UNIX sockets.  (I'm
>> frankly getting bored with working on the pipe implementation, and this
>            ^^^^^^^^^^^^^
> I'm not really surprised; Windows pipe semantics are annoying.
> 
>> doesn't really seem like it has much of a future.)  But I'd like to be
>> confident that there's a good solution to the datagram problem before I
>> invest too much time in this.
> 
> Summary of our short discussion on IRC:
> 
> - Switching to SOCK_STREAM under the hood adds the necessary reliability
>    but breaks DGRAM message boundaries.
> 
> - There appears to be no way in Winsock to handle send buffer overflow
>    gracefully so that user space knows that messages have been discarded.
>    Strangely enough, there's a SIO_ENABLE_CIRCULAR_QUEUEING ioctl, but that
>    just makes things worse, by dropping older messages in favor of the
>    newer ones :-P
> 
> I think it should be possible to switch to STREAM sockets to emulate
> DGRAM semantics.  Our advantage is that this is all local.  For all
> practical purposes there's no chance data gets really lost.  Windows has
> an almost unlimited send buffer.
> 
> If you look at the STREAM as a kind of tunneling layer for getting DGRAM
> messages over the (local) line, the DGRAM content could simply be
> encapsulated in a tunnel packet or frame, basically the same way the
> new, boring AF_UNIX code does it.  A DGRAM message encapsulated in a
> STREAM message always has a header which at least contains the length of
> the actual DGRAM message.  So when the peer reads from the socket, it
> always only reads the header until it's complete.  Then it knows how
> much payload is expected and then it reads until the payload has been
> received.

This should work.  We could even use MSG_PEEK to read the header and then 
MSG_WAITALL to read the whole packet.
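
Something like the following sketch, reusing the hypothetical dgram_hdr
from your framing description (whether Winsock's MSG_PEEK and
MSG_WAITALL behave this nicely is an assumption we'd have to verify):

#include <stdlib.h>
#include <string.h>
#include <sys/socket.h>

/* Sketch: peek the frame header, then consume header plus payload in a
   single MSG_WAITALL call.  Assumes the peek returns the complete
   header, which is plausible locally but needs checking on Winsock.  */
static ssize_t
recv_dgram_peek (int fd, void *buf, size_t len)
{
  struct dgram_hdr hdr;
  char *pkt;
  ssize_t n;
  size_t take;

  if (recv (fd, &hdr, sizeof hdr, MSG_PEEK) != sizeof hdr)
    return -1;
  pkt = malloc (sizeof hdr + hdr.len);
  if (!pkt)
    return -1;
  n = recv (fd, pkt, sizeof hdr + hdr.len, MSG_WAITALL);
  if (n != (ssize_t) (sizeof hdr + hdr.len))
    {
      free (pkt);
      return -1;
    }
  take = hdr.len < len ? hdr.len : len;	/* truncate to caller's buffer */
  memcpy (buf, pkt + sizeof hdr, take);
  free (pkt);
  return take;
}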

I'd be happy to try to implement this.  Do you want to create a branch (maybe 
topic/dgram or something like that) for working on it?

> Ultimately this would even allow us to emulate DGRAMs when using native
> Windows AF_UNIX sockets.  Then we'd just have to keep the old code for
> backward compat.

Yep.

> There's just one problem with this entire switch to non-pipes: Sending
> descriptors between peers running under different accounts requires the
> ability to switch the user context.  You need this so that the receiver
> can call ImpersonateNamedPipeClient when the sender is a non-admin
> account.  So we might need to keep the pipes, if only for the purpose of
> being able to call ImpersonateNamedPipeClient...
> 
> 
> Thoughts?

Sounds great.  Thanks.

Ken

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: The unreliability of AF_UNIX datagram sockets
  2021-04-29 14:38   ` Ken Brown
@ 2021-04-29 15:05     ` Corinna Vinschen
  2021-04-29 15:18       ` Corinna Vinschen
  2021-04-29 16:44       ` Ken Brown
  0 siblings, 2 replies; 26+ messages in thread
From: Corinna Vinschen @ 2021-04-29 15:05 UTC (permalink / raw)
  To: cygwin-developers

On Apr 29 10:38, Ken Brown wrote:
> On 4/29/2021 7:05 AM, Corinna Vinschen wrote:
> > On Apr 27 11:47, Ken Brown wrote:
> > > I'm willing to start working on the switch to native AF_UNIX sockets.  (I'm
> > > frankly getting bored with working on the pipe implementation, and this
> >            ^^^^^^^^^^^^^
> > I'm not really surprised; Windows pipe semantics are annoying.
> > 
> > > doesn't really seem like it has much of a future.)  But I'd like to be
> > > confident that there's a good solution to the datagram problem before I
> > > invest too much time in this.
> > 
> > Summary of our short discussion on IRC:
> > 
> > - Switching to SOCK_STREAM under the hood adds the necessary reliability
> >    but breaks DGRAM message boundaries.
> > 
> > - There appears to be no way in Winsock to handle send buffer overflow
> >    gracefully so that user space knows that messages have been discarded.
> >    Strangely enough, there's a SIO_ENABLE_CIRCULAR_QUEUEING ioctl, but that
> >    just makes things worse, by dropping older messages in favor of the
> >    newer ones :-P
> > 
> > I think it should be possible to switch to STREAM sockets to emulate
> > DGRAM semantics.  Our advantage is that this is all local.  For all
> > practical purposes there's no chance data gets really lost.  Windows has
> > an almost unlimited send buffer.
> > 
> > If you look at the STREAM as a kind of tunneling layer for getting DGRAM
> > messages over the (local) line, the DGRAM content could simply be
> > encapsulated in a tunnel packet or frame, basically the same way the
> > new, boring AF_UNIX code does it.  A DGRAM message encapsulated in a
> > STREAM message always has a header which at least contains the length of
> > the actual DGRAM message.  So when the peer reads from the socket, it
> > always only reads the header until it's complete.  Then it knows how
> > much payload is expected and then it reads until the payload has been
> > received.
> 
> This should work.  We could even use MSG_PEEK to read the header and then
> MSG_WAITALL to read the whole packet.
> 
> I'd be happy to try to implement this.  Do you want to create a branch
> (maybe topic/dgram or something like that) for working on it?

You can create topic branches as you see fit, don't worry about it.

> > Ultimately this would even allow us to emulate DGRAMs when using native
> > Windows AF_UNIX sockets.  Then we'd just have to keep the old code for
> > backward compat.
> 
> Yep.
> 
> > There's just one problem with this entire switch to non-pipes: Sending
> > descriptors between peers running under different accounts requires the
> > ability to switch the user context.  You need this so that the receiver
> > can call ImpersonateNamedPipeClient when the sender is a non-admin
> > account.  So we might need to keep the pipes, if only for the purpose of
> > being able to call ImpersonateNamedPipeClient...
> > 
> > 
> > Thoughts?
> 
> Sounds great.  Thanks.

Don't start just yet.

I'm still not quite sure if that's really the way to go.  As I see it we
still have something to discuss here.

For one thing, using native AF_UNIX sockets will split our user base
into two.  Those who are not using a recent enough Windows will get the
old code and no descriptor passing.  However, if an application has been
built with descriptor passing, it won't work for those running older
Windows versions.  I don't think we want that for the distro, or do we?

Next problem... implementing actual STREAM sockets.  Even using native
AF_UNIX sockets, these, too, would have to encapsulate the actual
payload because of the ancillary data we want to send with them.
Whether or not we use native AF_UNIX sockets, they won't be compatible
with native applications...

So maybe we should really think hard about the alternative
implementation using POSIX message queues, I guess.  And *if* we do
that, it should be used for STREAM as well as DGRAM sockets, so
the code is easier to maintain.  Obvious advantage: No problem with
older OS versions.  And maybe it's even dirt easy to implement in
comparison with using other methods, because the transport mechanism
is already in place.

What's missing is the ImpersonateNamedPipeClient stuff (but that's not
different from using native AF_UNIX) and some thought about the permission
handling.


Corinna


^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: The unreliability of AF_UNIX datagram sockets
  2021-04-29 15:05     ` Corinna Vinschen
@ 2021-04-29 15:18       ` Corinna Vinschen
  2021-04-29 16:44       ` Ken Brown
  1 sibling, 0 replies; 26+ messages in thread
From: Corinna Vinschen @ 2021-04-29 15:18 UTC (permalink / raw)
  To: cygwin-developers

On Apr 29 17:05, Corinna Vinschen wrote:
> On Apr 29 10:38, Ken Brown wrote:
> > Sounds great.  Thanks.
> 
> Don't start just yet.
> 
> I'm still not quite sure if that's really the way to go.  As I see it we
> still have something to discuss here.
> 
> For one thing, using native AF_UNIX sockets will split our user base
> into two.  Those who are not using a recent enough Windows will get the
> old code and no descriptor passing.  However, if an application has been
> built with descriptor passing, it won't work for those running older
> Windows versions.  I don't think we want that for the distro, or do we?
> 
> Next problem... implementing actual STREAM sockets.  Even using native
> AF_UNIX sockets, these, too, would have to encapsulate the actual
> payload because of the ancillary data we want to send with them.
> Whether or not we use native AF_UNIX sockets, they won't be compatible
> with native applications...

While searching the net I found this additional gem of information: 

Native AF_UNIX sockets don't support abstract sockets.  You must bind to
a valid path, so you always have a visible file in the filesystem.
Discussed here: https://github.com/microsoft/WSL/issues/4240
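
For reference, an abstract socket on Linux is bound with a leading NUL
byte in sun_path and never touches the filesystem; a minimal sketch:

#include <stddef.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>

/* Bind FD in the abstract namespace: sun_path starts with '\0' and the
   name occupies the following bytes.  No filesystem entry is created,
   which is exactly what native Windows AF_UNIX sockets can't do.  */
static int
bind_abstract (int fd, const char *name)
{
  struct sockaddr_un un;
  size_t len = strlen (name);

  if (len + 1 > sizeof un.sun_path)
    return -1;
  memset (&un, 0, sizeof un);
  un.sun_family = AF_UNIX;
  memcpy (un.sun_path + 1, name, len);	/* un.sun_path[0] stays '\0' */
  return bind (fd, (struct sockaddr *) &un,
	       offsetof (struct sockaddr_un, sun_path) + 1 + len);
}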

We could work around that with our POSIX unlink semantics, probably,
but it's yet another downside.


Corinna

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: The unreliability of AF_UNIX datagram sockets
  2021-04-29 15:05     ` Corinna Vinschen
  2021-04-29 15:18       ` Corinna Vinschen
@ 2021-04-29 16:44       ` Ken Brown
  2021-04-29 17:39         ` Corinna Vinschen
  1 sibling, 1 reply; 26+ messages in thread
From: Ken Brown @ 2021-04-29 16:44 UTC (permalink / raw)
  To: cygwin-developers

On 4/29/2021 11:05 AM, Corinna Vinschen wrote:
> On Apr 29 10:38, Ken Brown wrote:
>> On 4/29/2021 7:05 AM, Corinna Vinschen wrote:
>>> On Apr 27 11:47, Ken Brown wrote:
>>>> I'm willing to start working on the switch to native AF_UNIX sockets.  (I'm
>>>> frankly getting bored with working on the pipe implementation, and this
>>>             ^^^^^^^^^^^^^
>>> I'm not really surprised; Windows pipe semantics are annoying.
>>>
>>>> doesn't really seem like it has much of a future.)  But I'd like to be
>>>> confident that there's a good solution to the datagram problem before I
>>>> invest too much time in this.
>>>
>>> Summary of our short discussion on IRC:
>>>
>>> - Switching to SOCK_STREAM under the hood adds the necessary reliability
>>>     but breaks DGRAM message boundaries.
>>>
>>> - There appears to be no way in Winsock to handle send buffer overflow
>>>     gracefully so that user space knows that messages have been discarded.
>>>     Strangely enough, there's a SIO_ENABLE_CIRCULAR_QUEUEING ioctl, but that
>>>     just makes things worse, by dropping older messages in favor of the
>>>     newer ones :-P
>>>
>>> I think it should be possible to switch to STREAM sockets to emulate
>>> DGRAM semantics.  Our advantage is that this is all local.  For all
>>> practical purposes there's no chance data gets really lost.  Windows has
>>> an almost unlimited send buffer.
>>>
>>> If you look at the STREAM as a kind of tunneling layer for getting DGRAM
>>> messages over the (local) line, the DGRAM content could simply be
>>> encapsulated in a tunnel packet or frame, basically the same way the
>>> new, boring AF_UNIX code does it.  A DGRAM message encapsulated in a
>>> STREAM message always has a header which at least contains the length of
>>> the actual DGRAM message.  So when the peer reads from the socket, it
>>> always only reads the header until it's complete.  Then it knows how
>>> much payload is expected and then it reads until the payload has been
>>> received.
>>
>> This should work.  We could even use MSG_PEEK to read the header and then
>> MSG_WAITALL to read the whole packet.
>>
>> I'd be happy to try to implement this.  Do you want to create a branch
>> (maybe topic/dgram or something like that) for working on it?
> 
> You can create topic branches as you see fit, don't worry about it.
> 
>>> Ultimately this would even allow us to emulate DGRAMs when using native
>>> Windows AF_UNIX sockets.  Then we'd just have to keep the old code for
>>> backward compat.
>>
>> Yep.
>>
>>> There's just one problem with this entire switch to non-pipes: Sending
>>> descriptors between peers running under different accounts requires the
>>> ability to switch the user context.  You need this so that the receiver
>>> can call ImpersonateNamedPipeClient when the sender is a non-admin
>>> account.  So we might need to keep the pipes, if only for the purpose of
>>> being able to call ImpersonateNamedPipeClient...
>>>
>>>
>>> Thoughts?
>>
>> Sounds great.  Thanks.
> 
> Don't start just yet.
> 
> I'm still not quite sure if that's really the way to go.  As I see it we
> still have something to discuss here.
> 
> For one thing, using native AF_UNIX sockets will split our user base
> into two.  Those who are not using a recent enough Windows will get the
> old code and no descriptor passing.  However, if an application has been
> built with descriptor passing, it won't work for those running older
> Windows versions.  I don't think we want that for the distro, or do we?

Good point.  Sounds like a nightmare.

> Next problem... implementing actual STREAM sockets.  Even using native
> AF_UNIX sockets, these, too, would have to encapsulate the actual
> payload because of the ancillary data we want to send with them.
> Whether or not we use native AF_UNIX sockets, they won't be compatible
> with native applications...
> 
> So maybe we should really think hard about the alternative
> implementation using POSIX message queues, I guess.  And *if* we do
> that, it should be used for STREAM as well as DGRAM sockets, so
> the code is easier to maintain.  Obvious advantage: No problem with
> older OS versions.  And maybe it's even dirt easy to implement in
> comparison with using other methods, because the transport mechanism
> is already in place.

Yes, I don't think it should be too hard.  The one thing I can think of that's 
missing is a facility for doing a partial read of a message on the message 
queue.  (This would be needed for a recv call on a STREAM socket, in which the 
buffer is smaller than the payload of the next message on the queue.)  But this 
should be straightforward to implement.

Alternatively, I guess we could read the whole message and store the excess in a 
readahead buffer.
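
A minimal sketch of that readahead idea (the names and the size
constant are made up):

#include <mqueue.h>
#include <string.h>
#include <sys/types.h>

#define MSGSIZE_MAX 8192	/* hypothetical per-queue message limit */

struct rahead
{
  char buf[MSGSIZE_MAX];	/* excess bytes from the last message */
  size_t off, len;		/* read position and bytes remaining */
};

/* STREAM-style recv on top of an mqueue: serve leftover bytes first,
   otherwise pull the next whole message and stash what doesn't fit.  */
static ssize_t
recv_stream (mqd_t mqd, struct rahead *ra, void *buf, size_t len)
{
  char msg[MSGSIZE_MAX];
  size_t take;
  ssize_t n;

  if (ra->len > 0)
    {
      take = ra->len < len ? ra->len : len;
      memcpy (buf, ra->buf + ra->off, take);
      ra->off += take;
      ra->len -= take;
      return take;
    }
  n = mq_receive (mqd, msg, sizeof msg, NULL);
  if (n < 0)
    return -1;
  take = (size_t) n < len ? (size_t) n : len;
  memcpy (buf, msg, take);
  if ((size_t) n > take)
    {
      memcpy (ra->buf, msg + take, n - take);	/* stash the excess */
      ra->off = 0;
      ra->len = n - take;
    }
  return take;
}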

> What's missing is the ImpersonateNamedPipeClient stuff (but that's not
> different from using native AF_UNIX) and some thought about the permission
> handling.

On 4/29/2021 11:18 AM, Corinna Vinschen wrote:
 > While searching the net I found this additional gem of information:
 >
 > Native AF_UNIX sockets don't support abstract sockets.  You must bind to
 > a valid path, so you always have a visible file in the filesystem.
 > Discussed here: https://github.com/microsoft/WSL/issues/4240
 >
 > We could work around that with our POSIX unlink semantics, probably,
 > but it's yet another downside

Agreed.  The more features that are missing from native AF_UNIX sockets, the 
less appealing they become.

Concerning abstract sockets, would we still have an issue if we used message 
queues?  Wouldn't there be a visible file under /dev/mqueue?  Or is there a way 
around that?

Ken

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: The unreliability of AF_UNIX datagram sockets
  2021-04-29 16:44       ` Ken Brown
@ 2021-04-29 17:39         ` Corinna Vinschen
  2021-05-01 21:41           ` Ken Brown
  0 siblings, 1 reply; 26+ messages in thread
From: Corinna Vinschen @ 2021-04-29 17:39 UTC (permalink / raw)
  To: cygwin-developers

On Apr 29 12:44, Ken Brown wrote:
> On 4/29/2021 11:05 AM, Corinna Vinschen wrote:
> > So maybe we should really think hard about the alternative
> > implementation using POSIX message queues, I guess.  And *if* we do
> > that, it should be used for STREAM as well as DGRAM sockets, so
> > the code is easier to maintain.  Obvious advantage: No problem with
> > older OS versions.  And maybe it's even dirt easy to implement in
> > comparison with using other methods, because the transport mechanism
> > is already in place.
> 
> Yes, I don't think it should be too hard.  The one thing I can think of
> that's missing is a facility for doing a partial read of a message on the
> message queue.  (This would be needed for a recv call on a STREAM socket, in
> which the buffer is smaller than the payload of the next message on the
> queue.)  But this should be straightforward to implement.
> 
> Alternatively, I guess we could read the whole message and store the excess
> in a readahead buffer.

Alternatively, we could introduce a new, internal-only method into the
POSIX mq code, one that reads a partial message, reduces the message
to the remainder and keeps it on the queue head...

> On 4/29/2021 11:18 AM, Corinna Vinschen wrote:
> > While searching the net I found this additional gem of information:
> >
> > Native AF_UNIX sockets don't support abstract sockets.  You must bind to
> > a valid path, so you always have a visible file in the filesystem.
> > Discussed here: https://github.com/microsoft/WSL/issues/4240
> >
> > We could work around that with our POSIX unlink semantics, probably,
> > but it's yet another downside
> 
> Agreed.  The more features that are missing from native AF_UNIX sockets, the
> less appealing they become.
> 
> Concerning abstract sockets, would we still have an issue if we used message
> queues?  Wouldn't there be a visible file under /dev/mqueue?  Or is there a
> way around that?

Good point!  There's no way around that yet.  In theory that shouldn't
matter because /dev/mqueue is kind of a "virtual" path, even if Cygwin
implements the queues as real files.  But to put things in perspective,
we're in fact no better than native AF_UNIX here ¯\_(ツ)_/¯

Probably we should actually add an internal-only way of creating
non-file backed mqueues for the purpose of adding abstract sockets.


Corinna

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: The unreliability of AF_UNIX datagram sockets
  2021-04-29 17:39         ` Corinna Vinschen
@ 2021-05-01 21:41           ` Ken Brown
  2021-05-03 10:30             ` Corinna Vinschen
  0 siblings, 1 reply; 26+ messages in thread
From: Ken Brown @ 2021-05-01 21:41 UTC (permalink / raw)
  To: cygwin-developers

On 4/29/2021 1:39 PM, Corinna Vinschen wrote:
> On Apr 29 12:44, Ken Brown wrote:
>> On 4/29/2021 11:05 AM, Corinna Vinschen wrote:
>>> So maybe we should really think hard about the alternative
>>> implementation using POSIX message queues, I guess.  And *if* we do
>>> that, it should be used for STREAM as well as DGRAM sockets, so
>>> the code is easier to maintain.  Obvious advantage: No problem with
>>> older OS versions.  And maybe it's even dirt easy to implement in
>>> comparison with using other methods, because the transport mechanism
>>> is already in place.
>>
>> Yes, I don't think it should be too hard.  The one thing I can think of
>> that's missing is a facility for doing a partial read of a message on the
>> message queue.  (This would be needed for a recv call on a STREAM socket, in
>> which the buffer is smaller than the payload of the next message on the
>> queue.)  But this should be straightforward to implement.
>>
>> Alternatively, I guess we could read the whole message and store the excess
>> in a readahead buffer.
> 
> Alternatively, we could introduce a new, internal-only method into the
> POSIX mq code, one that reads a partial message, reduces the message
> to the remainder and keeps it on the queue head...
> 
>> On 4/29/2021 11:18 AM, Corinna Vinschen wrote:
>>> While searching the net I found this additional gem of information:
>>>
>>> Native AF_UNIX sockets don't support abstract sockets.  You must bind to
>>> a valid path, so you always have a visible file in the filesystem.
>>> Discussed here: https://github.com/microsoft/WSL/issues/4240
>>>
>>> We could work around that with our POSIX unlink semantics, probably,
>>> but it's yet another downside
>>
>> Agreed.  The more features that are missing from native AF_UNIX sockets, the
>> less appealing they become.
>>
>> Concerning abstract sockets, would we still have an issue if we used message
>> queues?  Wouldn't there be a visible file under /dev/mqueue?  Or is there a
>> way around that?
> 
> Good point!  There's no way around that yet.  In theory that shouldn't
> matter because /dev/mqueue is kind of a "virtual" path, even if Cygwin
> implements the queues as real files.  But to put things in perspective,
> we're in fact no better than native AF_UNIX here ¯\_(ツ)_/¯
> 
> Probably we should actually add an internal-only way of creating
> non-file backed mqueues for the purpose of adding abstract sockets.

I've been thinking about the overall design of using mqueues instead of pipes, 
and I just want to make sure I'm on the right track.  Here are my thoughts:

1. Each socket needs to create its own mqueue that it uses only for reading. 
For writing, it opens its peer's mqueue.  So each socket holds two mqueue 
descriptors, one for reading and one for writing.

2. A STREAM socket S that wants to connect to a listening socket T sends a
message to T containing S's mqueue name.  (Probably it's sufficient for S to
send its unique ID, from which the mqueue name will be constructed.)  T then
creates a socket T1, which sends its mqueue name (or ID) to S, and S and T1 are
then connected.  In the async case, maybe S uses mq_notify to set up the thread
that waits for a connection.  (A sketch of this handshake follows point 4.)

3. In fhandler_socket_unix::dup, the child will need to open any mqueues that 
the parent holds open.  Maybe an internal _mq_dup function would be useful here.

4. I'm not sure what needs to be done after fork/exec.  After an exec, all 
mqueue descriptors are automatically closed according to Kerrisk, but I don't 
see where this is done in the Cygwin code.  Or is it somehow automatic as a 
consequence of the mqueue implementation (which I haven't studied in detail)? 
On the other hand, why does Cygwin's mq_open accept O_CLOEXEC if this is the case?

And after a fork, something might need to be done to make sure that the child 
can set the blocking mode of its inherited mqueue descriptors independently of 
the parent.  If I understand the mqueue documentation correctly, this isn't 
normally the case.  In the terminology of Kerrisk, the mqueue descriptor that 
the child inherits from the parent refers to the same mqueue description as the 
parent's descriptor, and the blocking mode is part of the description.  But 
again, this might be Linux terminology that doesn't apply to Cygwin.
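
Here's roughly what I have in mind for the connect side of point 2, as
a sketch with made-up queue names, attributes and reply protocol:

#include <fcntl.h>
#include <mqueue.h>
#include <stdio.h>
#include <string.h>

/* Socket S: create our own read queue, then ask the listener T to
   connect us.  T's accept creates T1, opens our queue for writing and
   replies with T1's queue name on our read queue.  */
static int
connect_to_listener (const char *my_id, const char *listener_q,
		     mqd_t *rd, mqd_t *wr)
{
  struct mq_attr attr = { .mq_maxmsg = 10, .mq_msgsize = 64 };
  char qname[64], peer[64];
  mqd_t lq;

  snprintf (qname, sizeof qname, "/af-unix-%s", my_id);
  *rd = mq_open (qname, O_RDONLY | O_CREAT | O_EXCL, 0600, &attr);
  if (*rd == (mqd_t) -1)
    return -1;
  lq = mq_open (listener_q, O_WRONLY);	/* T's listening queue */
  if (lq == (mqd_t) -1)
    return -1;
  if (mq_send (lq, my_id, strlen (my_id) + 1, 0) < 0
      || mq_receive (*rd, peer, sizeof peer, NULL) < 0)
    {
      mq_close (lq);
      return -1;
    }
  mq_close (lq);
  *wr = mq_open (peer, O_WRONLY);	/* T1's queue, for writing */
  return *wr == (mqd_t) -1 ? -1 : 0;
}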

That's all I have for the moment, but I'm sure there will be more questions when 
I actually start coding.

Ken

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: The unreliability of AF_UNIX datagram sockets
  2021-05-01 21:41           ` Ken Brown
@ 2021-05-03 10:30             ` Corinna Vinschen
  2021-05-03 15:45               ` Corinna Vinschen
  0 siblings, 1 reply; 26+ messages in thread
From: Corinna Vinschen @ 2021-05-03 10:30 UTC (permalink / raw)
  To: cygwin-developers

Hi Ken,

On May  1 17:41, Ken Brown wrote:
> I've been thinking about the overall design of using mqueues instead of
> pipes, and I just want to make sure I'm on the right track.  Here are my
> thoughts:
> 
> 1. Each socket needs to create its own mqueue that it uses only for reading.
> For writing, it opens its peer's mqueue.  So each socket holds two mqueue
> descriptors, one for reading and one for writing.

Sounds right to me.

> 2. A STREAM socket S that wants to connect to a listening socket T sends a
> message to T containing S's mqueue name.  (Probably it's sufficient for S to
> send its unique ID, from which the mqueue name will be constructed.)  T then
> creates a socket T1, which sends its mqueue name (or ID) to S, and S and T1
> are then connected.  In the async case, maybe S uses mq_notify to set up the
> thread that waits for a connection.

Sounds good as well.  Maybe it's better to look at this from the
listener side in the first place, because that's the more tricky side,
but that's just a POV thingy.

> 3. In fhandler_socket_unix::dup, the child will need to open any mqueues
> that the parent holds open.  Maybe an internal _mq_dup function would be
> useful here.

Makes sense.

> 4. I'm not sure what needs to be done after fork/exec.  After an exec, all

Same here, see below.

> mqueue descriptors are automatically closed according to Kerrisk, but I
> don't see where this is done in the Cygwin code.  Or is it somehow automatic
> as a consequence of the mqueue implementation (which I haven't studied in
> detail)?

Yes, that's automatic.  The handles are duped, the addresses are either
on the heap or in an mmap, and those are duplicated automatically during
fork.  The file descriptor for the mmap'ed file gets closed right during
mq_open, so it's not inherited at all, and memory isn't inherited by an
exec'ed child.  But, see below (Note 2).

> On the other hand, why does Cygwin's mq_open accept O_CLOEXEC if
> this is the case?

The mq code doesn't handle incoming O_CLOEXEC explicitly; it just lets
open flags slip through.  I don't know what Linux' idea here is, but for
our implementation O_CLOEXEC has no meaning because the open flags other
than O_NONBLOCK are only used in the open(2) call for the mapped file,
and that uses O_CLOEXEC anyway.

> And after a fork, something might need to be done to make sure that the
> child can set the blocking mode of its inherited mqueue descriptors
> independently of the parent.  If I understand the mqueue documentation
> correctly, this isn't normally the case.  In the terminology of Kerrisk, the
> mqueue descriptor that the child inherits from the parent refers to the same
> mqueue description as the parent's descriptor, and the blocking mode is part
> of the description.  But again, this might be Linux terminology that doesn't
> apply to Cygwin.

Doesn't apply to Cygwin.  The structure representing the mqd_t, mq_info,
is used to keep track of the O_NONBLOCK flag, not the mqueue header.  So
the flag is local only.

> That's all I have for the moment, but I'm sure there will be more questions
> when I actually start coding.

Certainly.  As for the above "see below"s... I encountered a couple of
problems over the weekend myself (during soccer viewing, which I don't
care for at all), all of which either need fixing or have to be
implemented first.

1. As you noticed, the socket descriptors are inherited by exec'ed
   children, but the mqueue isn't.  So we need at least some kind of
   fixup_after_exec for mqueues used as part of AF_UNIX sockets.

2. While none of the mqueue structures are propagated to child
   processes, the handles to the synchronization objects accidentally
   are.

3. Notes 1 and 2 can only be implemented if we introduce a new
   superstructure keeping track of all mqd_t/mq_info structure
   pointers in an application.  Oh well.  Bummer, I was SOO happy
   that the posix_ipc stuff didn't need it yet...

4. As stated in the code comment leading the mqueue implementation,
   I used Stevens code as the basis.  What I didn't realize so far is
   that Stevens simplified the implementation in some ways.  The code
   works for real POSIX mqueues, but needs some more fixing before it
   can be used for AF_UNIX at all.

5. I hacked a bit on an mq-only mmap call, which is supposed to allow
   creating/opening of named shared memory areas, but that's a tricky
   extension to the mmap scenario.  I have a gut feeling that it's
   better to avoid using mmap at all and use Windows section mapping
   directly in mq_open/mq_close, especially if we have to implement
   fixup_after_exec semantics anyway.

6. Ultimately, AF_UNIX sockets should not run file-backed at all,
   anyway.  Given that sockets can't be bound multiple times, there's
   no persistency requirement for the mqueue.

7. ...?  Not sure if I forgot something here, but the above problems
   are quite enough to spend some time on already...


Corinna

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: The unreliability of AF_UNIX datagram sockets
  2021-05-03 10:30             ` Corinna Vinschen
@ 2021-05-03 15:45               ` Corinna Vinschen
  2021-05-03 16:56                 ` Ken Brown
  0 siblings, 1 reply; 26+ messages in thread
From: Corinna Vinschen @ 2021-05-03 15:45 UTC (permalink / raw)
  To: cygwin-developers

On May  3 12:30, Corinna Vinschen wrote:
> 1. As you noticed, the socket descriptors are inherited by exec'ed
>    children, but the mqueue isn't.  So we need at least some kind of
>    fixup_after_exec for mqueues used as part of AF_UNIX sockets.
> 
> 2. While none of the mqueue structures are propagated to child
>    processes, the handles to the synchronization objects accidentally
>    are.
> 
> 3. Notes 1 and 2 can only be implemented if we introduce a new
>    superstructure keeping track of all mqd_t/mq_info structure
>    pointers in an application.  Oh well.  Bummer, I was SOO happy
>    that the posix_ipc stuff didn't need it yet...
> 
> 4. As stated in the code comment leading the mqueue implementation,
>    I used Stevens code as the basis.  What I didn't realize so far is
>    that Stevens simplified the implementation in some ways.  The code
>    works for real POSIX mqueues, but needs some more fixing before it
>    can be used for AF_UNIX at all.
> 
> 5. I hacked a bit on an mq-only mmap call, which is supposed to allow
>    creating/opening of named shared memory areas, but that's a tricky
>    extension to the mmap scenario.  I have a gut feeling that it's
>    better to avoid using mmap at all and use Windows section mapping
>    directly in mq_open/mq_close, especially if we have to implement
>    fixup_after_exec semantics anyway.
> 
> 6. Ultimately, AF_UNIX sockets should not run file-backed at all,
>    anyway.  Given that sockets can't be bound multiple times, there's
>    no persistency requirement for the mqueue.

Got it:

7. The idea of _mq_recv partial reads is entirely broken.  Given that
   the information in the queue consists of header info plus payload,
   the entire block has to be read, and then a new block with fixed
   header and shortened payload has to be rewritten with bumped priority.
   This in turn can only be performed by the AF_UNIX code, unless we
   expect knowledge of the AF_UNIX packet layout in the mqueue code.


Corinna

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: The unreliability of AF_UNIX datagram sockets
  2021-05-03 15:45               ` Corinna Vinschen
@ 2021-05-03 16:56                 ` Ken Brown
  2021-05-03 18:40                   ` Corinna Vinschen
  0 siblings, 1 reply; 26+ messages in thread
From: Ken Brown @ 2021-05-03 16:56 UTC (permalink / raw)
  To: cygwin-developers

On 5/3/2021 11:45 AM, Corinna Vinschen wrote:
> On May  3 12:30, Corinna Vinschen wrote:
>> 1. As you noticed, the socket descriptors are inherited by exec'ed
>>     children, but the mqueue isn't.  So we need at least some kind of
>>     fixup_after_exec for mqueues used as part of AF_UNIX sockets.
>>
>> 2. While none of the mqueue structures are propagated to child
>>     processes, the handles to the synchronization objects accidentally
>>     are.
>>
>> 3. Notes 1 and 2 can only be implemented if we introduce a new
>>     superstructure keeping track of all mqd_t/mq_info structure
>>     pointers in an application.  Oh well.  Bummer, I was SOO happy
>>     that the posix_ipc stuff didn't need it yet...
>>
>> 4. As stated in the code comment leading the mqueue implementation,
>>     I used Stevens code as the basis.  What I didn't realize so far is
>>     that Stevens simplified the implementation in some ways.  The code
>>     works for real POSIX mqueues, but needs some more fixing before it
>>     can be used for AF_UNIX at all.
>>
>> 5. I hacked a bit on an mq-only mmap call, which is supposed to allow
>>     creating/opening of named shared memory areas, but that's a tricky
>>     extension to the mmap scenario.  I have a gut feeling that it's
>>     better to avoid using mmap at all and use Windows section mapping
>>     directly in mq_open/mq_close, especially if we have to implement
>>     fixup_after_exec semantics anyway.
>>
>> 6. Ultimately, AF_UNIX sockets should not run file-backed at all,
>>     anyway.  Given that sockets can't be bound multiple times, there's
>>     no persistency requirement for the mqueue.
> 
> Got it:
> 
> 7. The idea of _mq_recv partial reads is entirely broken.  Given that
>     the information in the queue consists of header info plus payload,
>     the entire block has to be read, and then a new block with fixed
>     header and shortened payload has to be rewritten with bumped priority.
>     This in turn can only be performed by the AF_UNIX code, unless we
>     expect knowledge of the AF_UNIX packet layout in the mqueue code.

The partial read is actually OK as is, since it's comparable to what happens on 
a partial read from a pipe.  I already have AF_UNIX code (on the topic/af_unix 
branch) that deals with that.  A boolean variable _unread keeps track of whether 
there's unread data from a previous partial read.  If so, the next read just 
reads data without expecting a header.

Ken

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: The unreliability of AF_UNIX datagram sockets
  2021-05-03 16:56                 ` Ken Brown
@ 2021-05-03 18:40                   ` Corinna Vinschen
  2021-05-03 19:48                     ` Ken Brown
  0 siblings, 1 reply; 26+ messages in thread
From: Corinna Vinschen @ 2021-05-03 18:40 UTC (permalink / raw)
  To: cygwin-developers

On May  3 12:56, Ken Brown wrote:
> On 5/3/2021 11:45 AM, Corinna Vinschen wrote:
> > 7. The idea of _mq_recv partial reads is entirely broken.  Given that
> >     the information in the queue consists of header info plus payload,
> >     the entire block has to be read, and then a new block with fixed
> >     header and shortened payload has to be rewritten with bumped priority.
> >     This in turn can only be performed by the AF_UNIX code, unless we
> >     expect knowledge of the AF_UNIX packet layout in the mqueue code.
> 
> The partial read is actually OK as is, since it's comparable to what happens
> on a partial read from a pipe.  I already have AF_UNIX code (on the
> topic/af_unix branch) that deals with that.  A boolean variable _unread
> keeps track of whether there's unread data from a previous partial read.  If
> so, the next read just reads data without expecting a header.

Ok, never mind.

One advantage of the mqueue when utilized as above would be that this
kind of state info is not required.  The content of a packet would
always be self-contained and bumping the priority would automagically
move the packet content to the top of the queue.  But that's just
idle musing at this point.


Corinna

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: The unreliability of AF_UNIX datagram sockets
  2021-05-03 18:40                   ` Corinna Vinschen
@ 2021-05-03 19:48                     ` Ken Brown
  2021-05-03 20:50                       ` Ken Brown
  0 siblings, 1 reply; 26+ messages in thread
From: Ken Brown @ 2021-05-03 19:48 UTC (permalink / raw)
  To: cygwin-developers

On 5/3/2021 2:40 PM, Corinna Vinschen wrote:
> On May  3 12:56, Ken Brown wrote:
>> On 5/3/2021 11:45 AM, Corinna Vinschen wrote:
>>> 7. The idea of _mq_recv partial reads is entirely broken.  Given that
>>>      the information in the queue consists of header info plus payload,
>>>      the entire block has to be read, and then a new block with fixed
>>>      header and shortened payload has to be rewritten with bumped priority.
>>>      This in turn can only be performed by the AF_UNIX code, unless we
>>>      expect knowledge of the AF_UNIX packet layout in the mqueue code.
>>
>> The partial read is actually OK as is, since it's comparable to what happens
>> on a partial read from a pipe.  I already have AF_UNIX code (on the
>> topic/af_unix branch) that deals with that.  A boolean variable _unread
>> keeps track of whether there's unread data from a previous partial read.  If
>> so, the next read just reads data without expecting a header.
> 
> Ok, never mind.
> 
> One advantage of the mqueue when utilized as above would be that this
> kind of state info is not required.  The content of a packet would
> always be self-contained and bumping the priority would automagically
> move the packet content to the top of the queue.  But that's just
> idle musing at this point.

I thought about that but rejected it for the following reason: Suppose the 
receiver reads a message and tries to rewrite it with modified header, shortened 
payload, and bumped priority.  The sender might have already written more 
messages between the read and the write, and the queue could be full.

Now that I'm rethinking this, however, maybe we could get around that problem 
with an internal _mq_lock function that would block senders while the receiver 
decides whether it needs to do a partial read.

Ken

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: The unreliability of AF_UNIX datagram sockets
  2021-05-03 19:48                     ` Ken Brown
@ 2021-05-03 20:50                       ` Ken Brown
  2021-05-04 11:06                         ` Corinna Vinschen
  0 siblings, 1 reply; 26+ messages in thread
From: Ken Brown @ 2021-05-03 20:50 UTC (permalink / raw)
  To: cygwin-developers

On 5/3/2021 3:48 PM, Ken Brown wrote:
> On 5/3/2021 2:40 PM, Corinna Vinschen wrote:
>> On May  3 12:56, Ken Brown wrote:
>>> On 5/3/2021 11:45 AM, Corinna Vinschen wrote:
>>>> 7. The idea of _mq_recv partial reads is entirely broken.  Given that
>>>>      the information in the queue consists of header info plus payload,
>>>>      the entire block has to be read, and then a new block with fixed
>>>>      header and shortened payload has to be rewritten with bumped priority.
>>>>      This in turn can only be performed by the AF_UNIX code, unless we
>>>>      expect knowledge of the AF_UNIX packet layout in the mqueue code.
>>>
>>> The partial read is actually OK as is, since it's comparable to what happens
>>> on a partial read from a pipe.  I already have AF_UNIX code (on the
>>> topic/af_unix branch) that deals with that.  A boolean variable _unread
>>> keeps track of whether there's unread data from a previous partial read.  If
>>> so, the next read just reads data without expecting a header.
>>
>> Ok, never mind.
>>
>> One advantage of the mqueue when utilized as above would be that this
>> kind of state info is not required.  The content of a packet would
>> always be self-contained and bumping the priority would automagically
>> move the packet content to the top of the queue.  But that's just
>> idle musing at this point.
> 
> I thought about that but rejected it for the following reason: Suppose the 
> receiver reads a message and tries to rewrite it with modified header, shortened 
> payload, and bumped priority.  The sender might have already written more 
> messages between the read and the write, and the queue could be full.
> 
> Now that I'm rethinking this, however, maybe we could get around that problem 
> with an internal _mq_lock function that would block senders while the receiver 
> decides whether it needs to do a partial read.

Alternatively, _mq_recv could accept an _MQ_LOCK flag, which means "don't 
release the mutex", and then there could be an _mq_unlock function, which simply 
releases the mutex.
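
In other words, something like this (purely a sketch of a
not-yet-existing internal API):

#include <mqueue.h>
#include <sys/types.h>

#define _MQ_LOCK 0x1	/* hypothetical: return with the queue mutex held */

/* Hypothetical internal entry points, declarations only.  */
ssize_t _mq_recv (mqd_t mqd, char *buf, size_t len, int flags);
int _mq_unlock (mqd_t mqd);

/* The receiver keeps senders blocked until it has decided whether the
   remainder of a partially read message must be requeued.  */
static ssize_t
recv_then_unlock (mqd_t mqd, char *buf, size_t len)
{
  ssize_t n = _mq_recv (mqd, buf, len, _MQ_LOCK);  /* mutex still held */
  /* ... requeue any remainder here while senders are blocked ... */
  _mq_unlock (mqd);
  return n;
}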

Ken

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: The unreliability of AF_UNIX datagram sockets
  2021-05-03 20:50                       ` Ken Brown
@ 2021-05-04 11:06                         ` Corinna Vinschen
  2021-05-13 14:30                           ` Ken Brown
  0 siblings, 1 reply; 26+ messages in thread
From: Corinna Vinschen @ 2021-05-04 11:06 UTC (permalink / raw)
  To: cygwin-developers

On May  3 16:50, Ken Brown wrote:
> On 5/3/2021 3:48 PM, Ken Brown wrote:
> > On 5/3/2021 2:40 PM, Corinna Vinschen wrote:
> > > On May  3 12:56, Ken Brown wrote:
> > > > On 5/3/2021 11:45 AM, Corinna Vinschen wrote:
> > > > > 7. The idea of _mq_recv partial reads is entirely broken.  Given that
> > > > >      the information in the queue consists of header info plus payload,
> > > > >      the entire block has to be read, and then a new block with fixed
> > > > >      header and shortened payload has to be rewritten with bumped priority.
> > > > >      This in turn can only be performed by the AF_UNIX code, unless we
> > > > >      expect knowledge of the AF_UNIX packet layout in the mqueue code.
> > > > 
> > > > The partial read is actually OK as is, since it's comparable to what happens
> > > > on a partial read from a pipe.  I already have AF_UNIX code (on the
> > > > topic/af_unix branch) that deals with that.  A boolean variable _unread
> > > > keeps track of whether there's unread data from a previous partial read.  If
> > > > so, the next read just reads data without expecting a header.
> > > 
> > > Ok, never mind.
> > > 
> > > One advantage of the mqueue when utilized as above would be that this
> > > kind of state info is not required.  The content of a packet would
> > > always be self-contained and bumping the priority would automagically
> > > move the packet content to the top of the queue.  But that's just
> > > idle musing at this point.
> > 
> > I thought about that but rejected it for the following reason: Suppose
> > the receiver reads a message and tries to rewrite it with modified
> > header, shortened payload, and bumped priority.  The sender might have
> > already written more messages between the read and the write, and the
> > queue could be full.
> > 
> > Now that I'm rethinking this, however, maybe we could get around that
> > problem with an internal _mq_lock function that would block senders
> > while the receiver decides whether it needs to do a partial read.
> 
> Alternatively, _mq_recv could accept an _MQ_LOCK flag, which means "don't
> release the mutex", and then there could be an _mq_unlock function, which
> simply releases the mutex.

That's an idea.  However, I think this is something we can push off for
now; ultimately we can use whichever of the above solutions makes the most
sense.  Implementing a deferred unlock, if required, is not much of a problem.


Corinna

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: The unreliability of AF_UNIX datagram sockets
  2021-05-04 11:06                         ` Corinna Vinschen
@ 2021-05-13 14:30                           ` Ken Brown
  2021-05-17 10:26                             ` Corinna Vinschen
  0 siblings, 1 reply; 26+ messages in thread
From: Ken Brown @ 2021-05-13 14:30 UTC (permalink / raw)
  To: cygwin-developers

On 5/4/2021 7:06 AM, Corinna Vinschen wrote:
> On May  3 16:50, Ken Brown wrote:
>> On 5/3/2021 3:48 PM, Ken Brown wrote:
>>> On 5/3/2021 2:40 PM, Corinna Vinschen wrote:
>>>> On May  3 12:56, Ken Brown wrote:
>>>>> On 5/3/2021 11:45 AM, Corinna Vinschen wrote:
>>>>>> 7. The idea of _mq_recv partial reads is entirely broken.  Given that
>>>>>>       the information in the queue consists of header info plus payload,
>>>>>>       the entire block has to be read, and then a new block with fixed
>>>>>>       header and shortened payload has to be rewritten with bumped priority.
>>>>>>       This in turn can only be performed by the AF_UNIX code, unless we
>>>>>>       expect knowledge of the AF_UNIX packet layout in the mqueue code.
>>>>>
>>>>> The partial read is actually OK as is, since it's comparable to what happens
>>>>> on a partial read from a pipe.  I already have AF_UNIX code (on the
>>>>> topic/af_unix branch) that deals with that.  A boolean variable _unread
>>>>> keeps track of whether there's unread data from a previous partial read.  If
>>>>> so, the next read just reads data without expecting a header.
>>>>
>>>> Ok, never mind.
>>>>
>>>> One advantage of the mqueue when utilized as above would be that this
>>>> kind of state info is not required.  The content of a packet would
>>>> always be self-contained and bumping the priority would automagically
>>>> move the packet content to the top of the queue.  But that's just
>>>> idle musing at this point.
>>>
>>> I thought about that but rejected it for the following reason: Suppose
>>> the receiver reads a message and tries to rewrite it with modified
>>> header, shortened payload, and bumped priority.  The sender might have
>>> already written more messages between the read and the write, and the
>>> queue could be full.
>>>
>>> Now that I'm rethinking this, however, maybe we could get around that
>>> problem with an internal _mq_lock function that would block senders
>>> while the receiver decides whether it needs to do a partial read.
>>
>> Alternatively, _mq_recv could accept an _MQ_LOCK flag, which means "don't
>> release the mutex", and then there could be an _mq_unlock function, which
>> simply releases the mutex.
> 
> That's an idea.  However, I think this is something we can push back for
> now; ultimately we can use whichever of the above solutions makes the
> most sense.  Implementing a deferred unlock, if required, is not much
> of a problem.

I've begun working on an mqueue-based implementation of AF_UNIX sockets, on a 
new topic/af_unix_mq branch.  I've implemented accept and connect and lightly 
tested them, and I've implemented sendmsg (not yet tested).  I'll start on 
recvmsg next, and then I'll be able to test sending/receiving.

While working on sendmsg, I thought of another useful thing that the mqueue code 
could provide: an internal _mq_send function that returns EPIPE if no one has 
the mqueue open for reading.  This plays the role of the STATUS_PIPE_IS_CLOSED 
macro in the pipe implementation.
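
Concretely, something like this (hypothetical signature):

#include <mqueue.h>
#include <sys/types.h>

/* Hypothetical: like mq_send, but fails with EPIPE as soon as no
   descriptor has the queue open for reading -- the moral equivalent
   of STATUS_PIPE_IS_CLOSED in the pipe code.  */
int _mq_send (mqd_t mqd, const char *ptr, size_t len, unsigned prio);
					/* -1 / EPIPE if no reader */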

Ken

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: The unreliability of AF_UNIX datagram sockets
  2021-05-13 14:30                           ` Ken Brown
@ 2021-05-17 10:26                             ` Corinna Vinschen
  2021-05-17 13:02                               ` Ken Brown
  0 siblings, 1 reply; 26+ messages in thread
From: Corinna Vinschen @ 2021-05-17 10:26 UTC (permalink / raw)
  To: cygwin-developers

On May 13 10:30, Ken Brown wrote:
> While working on sendmsg, I thought of another useful thing that the mqueue
> code could provide: an internal _mq_send function that returns EPIPE if no
> one has the mqueue open for reading.  This plays the role of the
> STATUS_PIPE_IS_CLOSED macro in the pipe implementation.

I don't see how to do that.  The code is written so that it doesn't need
to keep track of any state at runtime; all shared info is part of the
mmap so far.  We don't know whether any other mqd_t descriptor is open at
the time; it's not even a descriptor in the usual sense, just a pointer
to a private malloc'ed area.  Keeping track of open descriptors would
require another shared mem region at runtime.


Corinna

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: The unreliability of AF_UNIX datagram sockets
  2021-05-17 10:26                             ` Corinna Vinschen
@ 2021-05-17 13:02                               ` Ken Brown
  0 siblings, 0 replies; 26+ messages in thread
From: Ken Brown @ 2021-05-17 13:02 UTC (permalink / raw)
  To: cygwin-developers

On 5/17/2021 6:26 AM, Corinna Vinschen wrote:
> On May 13 10:30, Ken Brown wrote:
>> While working on sendmsg, I thought of another useful thing that the mqueue
>> code could provide: an internal _mq_send function that returns EPIPE if no
>> one has the mqueue open for reading.  This plays the role of the
>> STATUS_PIPE_IS_CLOSED macro in the pipe implementation.
> 
> I don't see how to do that.  The code is written so that it doesn't need
> to keep track of any state at runtime; all shared info is part of the
> mmap so far.  We don't know whether any other mqd_t descriptor is open at
> the time; it's not even a descriptor in the usual sense, just a pointer
> to a private malloc'ed area.  Keeping track of open descriptors would
> require another shared mem region at runtime.

OK, then I think I can handle this in the AF_UNIX shared memory.  Each socket 
can keep track of its own number of open (socket) descriptors, and then it can 
send a shutdown message to its peer when the last one is about to close.  The 
sendmsg code will have to be tweaked to repeatedly check for shutdown messages.
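
In outline (every name here is hypothetical, none of it is existing
code):

#include <windows.h>

/* Hypothetical bookkeeping in the per-socket shared memory block.  */
struct af_unix_shmem
{
  LONG open_fds;		/* open descriptors on this socket */
  /* ... existing shared state ... */
};

/* Hypothetical helper, implemented elsewhere on top of _mq_send.  */
extern void send_shutdown_packet_to_peer (struct af_unix_shmem *sh);

static void
on_fd_open (struct af_unix_shmem *sh)
{
  InterlockedIncrement (&sh->open_fds);
}

static void
on_fd_close (struct af_unix_shmem *sh)
{
  /* Last descriptor going away: tell the peer it can stop reading.  */
  if (InterlockedDecrement (&sh->open_fds) == 0)
    send_shutdown_packet_to_peer (sh);
}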

Ken

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: The unreliability of AF_UNIX datagram sockets
  2021-04-29 11:05 ` Corinna Vinschen
  2021-04-29 11:16   ` Corinna Vinschen
  2021-04-29 14:38   ` Ken Brown
@ 2021-05-20 13:46   ` Ken Brown
  2021-05-20 19:25     ` Corinna Vinschen
  2 siblings, 1 reply; 26+ messages in thread
From: Ken Brown @ 2021-05-20 13:46 UTC (permalink / raw)
  To: cygwin-developers

On 4/29/2021 7:05 AM, Corinna Vinschen wrote:
> I think it should be possible to switch to STREAM sockets to emulate
> DGRAM semantics.  Our advantage is that this is all local.  For all
> practical purposes there's no chance data gets really lost.  Windows has
> an almost indefinite send buffer.
> 
> If you look at the STREAM as a kind of tunneling layer for getting DGRAM
> messages over the (local) line, the DGRAM content could simply be
> encapsulated in a tunnel packet or frame, basically the same way the
> new, boring AF_UNIX code does it.  A DGRAM message encapsulated in a
> STREAM message always has a header which at least contains the length of
> the actual DGRAM message.  So when the peer reads from the socket, it
> always only reads the header until it's complete.  Then it knows how
> much payload is expected and then it reads until the payload has been
> received.

I think I'd like to go ahead and try to do this DGRAM emulation in the current 
(AF_LOCAL) code.  It shouldn't be too hard, and it would solve the unreliability 
problem while we look for a better way to handle AF_UNIX sockets.
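
For concreteness, the receiving side of the tunnel would look roughly
like this (illustrative sketch only, not the actual packet layout):

#include <stdint.h>
#include <unistd.h>

/* Each datagram travels over the stream as a length header followed
   by exactly that many payload bytes.  */

/* Read exactly LEN bytes; short reads just mean "keep reading".  */
static ssize_t
read_full (int fd, void *buf, size_t len)
{
  size_t got = 0;
  while (got < len)
    {
      ssize_t n = read (fd, (char *) buf + got, len - got);
      if (n <= 0)
	return n;		/* error, or EOF mid-message */
      got += n;
    }
  return (ssize_t) got;
}

/* Receive one tunneled datagram: header first, then payload.  A real
   implementation would also have to drain payloads larger than the
   buffer; here we simply assume the buffer is big enough.  */
static ssize_t
recv_dgram (int fd, void *buf, size_t bufsiz)
{
  uint32_t len;
  if (read_full (fd, &len, sizeof len) <= 0)
    return -1;
  return read_full (fd, buf, len <= bufsiz ? len : bufsiz);
}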

Ken

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: The unreliability of AF_UNIX datagram sockets
  2021-05-20 13:46   ` Ken Brown
@ 2021-05-20 19:25     ` Corinna Vinschen
  2021-05-21 21:54       ` Ken Brown
  0 siblings, 1 reply; 26+ messages in thread
From: Corinna Vinschen @ 2021-05-20 19:25 UTC (permalink / raw)
  To: cygwin-developers

On May 20 09:46, Ken Brown wrote:
> On 4/29/2021 7:05 AM, Corinna Vinschen wrote:
> > I think it should be possible to switch to STREAM sockets to emulate
> > DGRAM semantics.  Our advantage is that this is all local.  For all
> > practical purposes there's no chance data gets really lost.  Windows has
> > an almost indefinite send buffer.
> > 
> > If you look at the STREAM as a kind of tunneling layer for getting DGRAM
> > messages over the (local) line, the DGRAM content could simply be
> > encapsulated in a tunnel packet or frame, basically the same way the
> > new, boring AF_UNIX code does it.  A DGRAM message encapsulated in a
> > STREAM message always has a header which at least contains the length of
> > the actual DGRAM message.  So when the peer reads from the socket, it
> > always only reads the header until it's complete.  Then it knows how
> > much payload is expected and then it reads until the payload has been
> > received.
> 
> I think I'd like to go ahead and try to do this DGRAM emulation in the
> current (AF_LOCAL) code.  It shouldn't be too hard, and it would solve the
> unreliability problem while we look for a better way to handle AF_UNIX
> sockets.

Yeah, sounds like the way to go for now.


Thanks,
Corinna

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: The unreliability of AF_UNIX datagram sockets
  2021-05-20 19:25     ` Corinna Vinschen
@ 2021-05-21 21:54       ` Ken Brown
  2021-05-22 15:49         ` Corinna Vinschen
  0 siblings, 1 reply; 26+ messages in thread
From: Ken Brown @ 2021-05-21 21:54 UTC (permalink / raw)
  To: cygwin-developers

On 5/20/2021 3:25 PM, Corinna Vinschen wrote:
> On May 20 09:46, Ken Brown wrote:
>> On 4/29/2021 7:05 AM, Corinna Vinschen wrote:
>>> I think it should be possible to switch to STREAM sockets to emulate
>>> DGRAM semantics.  Our advantage is that this is all local.  For all
>>> practical purposes there's no chance data gets really lost.  Windows has
>>> an almost indefinite send buffer.
>>>
>>> If you look at the STREAM as a kind of tunneling layer for getting DGRAM
>>> messages over the (local) line, the DGRAM content could simply be
>>> encapsulated in a tunnel packet or frame, basically the same way the
>>> new, boring AF_UNIX code does it.  A DGRAM message encapsulated in a
>>> STREAM message always has a header which at least contains the length of
>>> the actual DGRAM message.  So when the peer reads from the socket, it
>>> always only reads the header until it's complete.  Then it knows how
>>> much payload is expected and then it reads until the payload has been
>>> received.
>>
>> I think I'd like to go ahead and try to do this DGRAM emulation in the
>> current (AF_LOCAL) code.  It shouldn't be too hard, and it would solve the
>> unreliability problem while we look for a better way to handle AF_UNIX
>> sockets.
> 
> Yeah, sounds like the way to go for now.

Unfortunately, I ran into a problem.  Trying to emulate DGRAM sockets in STREAM 
sockets breaks the DGRAM send/recv semantics.  For example, WSARecvFrom won't 
return the source address.  I hope I'm just missing something, but I don't see a 
way around this.

Ken

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: The unreliability of AF_UNIX datagram sockets
  2021-05-21 21:54       ` Ken Brown
@ 2021-05-22 15:49         ` Corinna Vinschen
  2021-05-22 16:50           ` Ken Brown
  0 siblings, 1 reply; 26+ messages in thread
From: Corinna Vinschen @ 2021-05-22 15:49 UTC (permalink / raw)
  To: cygwin-developers

On May 21 17:54, Ken Brown wrote:
> On 5/20/2021 3:25 PM, Corinna Vinschen wrote:
> > On May 20 09:46, Ken Brown wrote:
> > > On 4/29/2021 7:05 AM, Corinna Vinschen wrote:
> > > > I think it should be possible to switch to STREAM sockets to emulate
> > > > DGRAM semantics.  Our advantage is that this is all local.  For all
> > > > practical purposes there's no chance data gets really lost.  Windows has
> > > > an almost indefinite send buffer.
> > > > 
> > > > If you look at the STREAM as a kind of tunneling layer for getting DGRAM
> > > > messages over the (local) line, the DGRAM content could simply be
> > > > encapsulated in a tunnel packet or frame, basically the same way the
> > > > new, boring AF_UNIX code does it.  A DGRAM message encapsulated in a
> > > > STREAM message always has a header which at least contains the length of
> > > > the actual DGRAM message.  So when the peer reads from the socket, it
> > > > always only reads the header until it's complete.  Then it knows how
> > > > much payload is expected and then it reads until the payload has been
> > > > received.
> > > 
> > > I think I'd like to go ahead and try to do this DGRAM emulation in the
> > > current (AF_LOCAL) code.  It shouldn't be too hard, and it would solve the
> > > unreliability problem while we look for a better way to handle AF_UNIX
> > > sockets.
> > 
> > Yeah, sounds like the way to go for now.
> 
> Unfortunately, I ran into a problem.  Trying to emulate DGRAM sockets in
> STREAM sockets breaks the DGRAM send/recv semantics.  For example,
> WSARecvFrom won't return the source address.

It doesn't anyway, does it?  I mean, this is entirely local and the
source address is, basically, the same socket.

> I hope I'm just missing
> something, but I don't see a way around this.

I hope I'm not missing something either...


Corinna

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: The unreliability of AF_UNIX datagram sockets
  2021-05-22 15:49         ` Corinna Vinschen
@ 2021-05-22 16:50           ` Ken Brown
  2021-05-22 18:21             ` Ken Brown
  0 siblings, 1 reply; 26+ messages in thread
From: Ken Brown @ 2021-05-22 16:50 UTC (permalink / raw)
  To: cygwin-developers

On 5/22/2021 11:49 AM, Corinna Vinschen wrote:
> On May 21 17:54, Ken Brown wrote:
>> On 5/20/2021 3:25 PM, Corinna Vinschen wrote:
>>> On May 20 09:46, Ken Brown wrote:
>>>> On 4/29/2021 7:05 AM, Corinna Vinschen wrote:
>>>>> I think it should be possible to switch to STREAM sockets to emulate
>>>>> DGRAM semantics.  Our advantage is that this is all local.  For all
>>>>> practical purposes there's no chance data gets really lost.  Windows has
>>>>> an almost indefinite send buffer.
>>>>>
>>>>> If you look at the STREAM as a kind of tunneling layer for getting DGRAM
>>>>> messages over the (local) line, the DGRAM content could simply be
>>>>> encapsulated in a tunnel packet or frame, basically the same way the
>>>>> new, boring AF_UNIX code does it.  A DGRAM message encapsulated in a
>>>>> STREAM message always has a header which at least contains the length of
>>>>> the actual DGRAM message.  So when the peer reads from the socket, it
>>>>> always only reads the header until it's complete.  Then it knows how
>>>>> much payload is expected and then it reads until the payload has been
>>>>> received.
>>>>
>>>> I think I'd like to go ahead and try to do this DGRAM emulation in the
>>>> current (AF_LOCAL) code.  It shouldn't be too hard, and it would solve the
>>>> unreliability problem while we look for a better way to handle AF_UNIX
>>>> sockets.
>>>
>>> Yeah, sounds like the way to go for now.
>>
>> Unfortunately, I ran into a problem.  Trying to emulate DGRAM sockets in
>> STREAM sockets breaks the DGRAM send/recv semantics.  For example,
>> WSARecvFrom won't return the source address.
> 
> It doesn't anyway, does it?  I mean, this is entirely local and the
> source address is, basically, the same socket.

From the Winsock point of view, the sending socket is an AF_INET socket, whose 
name is a struct sockaddr_in (the crucial data being the port number). 
fhandler_socket_local::recv_internal then converts the sockaddr_in of the sender 
to an abstract sockaddr_un that encodes the port number, so that the receiver 
can send back a reply.
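
Schematically, that conversion amounts to something like this
(illustrative only, not the exact encoding):

#include <stdio.h>
#include <string.h>
#include <netinet/in.h>
#include <sys/un.h>

/* Illustrative only: turn the sender's AF_INET name into an abstract
   sockaddr_un encoding the port, usable as a reply address.  */
static void
port_to_abstract_sun (const struct sockaddr_in *in, struct sockaddr_un *un)
{
  memset (un, 0, sizeof *un);
  un->sun_family = AF_UNIX;
  /* sun_path[0] == '\0' marks the abstract namespace.  */
  snprintf (un->sun_path + 1, sizeof un->sun_path - 1, "%u",
	    (unsigned) ntohs (in->sin_port));
}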

Aside from this issue, there's also the fact that all the send/recv functions 
when applied to STREAM sockets expect the socket to be connected.  But if we're 
using STREAM sockets to emulate DGRAM sockets, they typically won't be 
connected.  (And "connected" means something different for DGRAMs anyway.)

Ken

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: The unreliability of AF_UNIX datagram sockets
  2021-05-22 16:50           ` Ken Brown
@ 2021-05-22 18:21             ` Ken Brown
  0 siblings, 0 replies; 26+ messages in thread
From: Ken Brown @ 2021-05-22 18:21 UTC (permalink / raw)
  To: cygwin-developers

On 5/22/2021 12:50 PM, Ken Brown wrote:
> On 5/22/2021 11:49 AM, Corinna Vinschen wrote:
>> On May 21 17:54, Ken Brown wrote:
>>> On 5/20/2021 3:25 PM, Corinna Vinschen wrote:
>>>> On May 20 09:46, Ken Brown wrote:
>>>>> On 4/29/2021 7:05 AM, Corinna Vinschen wrote:
>>>>>> I think it should be possible to switch to STREAM sockets to emulate
>>>>>> DGRAM semantics.  Our advantage is that this is all local.  For all
>>>>>> practical purposes there's no chance data gets really lost.  Windows has
>>>>>> an almost indefinite send buffer.
>>>>>>
>>>>>> If you look at the STREAM as a kind of tunneling layer for getting DGRAM
>>>>>> messages over the (local) line, the DGRAM content could simply be
>>>>>> encapsulated in a tunnel packet or frame, basically the same way the
>>>>>> new, boring AF_UNIX code does it.  A DGRAM message encapsulated in a
>>>>>> STREAM message always has a header which at least contains the length of
>>>>>> the actual DGRAM message.  So when the peer reads from the socket, it
>>>>>> always only reads the header until it's complete.  Then it knows how
>>>>>> much payload is expected and then it reads until the payload has been
>>>>>> received.
>>>>>
>>>>> I think I'd like to go ahead and try to do this DGRAM emulation in the
>>>>> current (AF_LOCAL) code.  It shouldn't be too hard, and it would solve the
>>>>> unreliability problem while we look for a better way to handle AF_UNIX
>>>>> sockets.
>>>>
>>>> Yeah, sounds like the way to go for now.
>>>
>>> Unfortunately, I ran into a problem.  Trying to emulate DGRAM sockets in
>>> STREAM sockets breaks the DGRAM send/recv semantics.  For example,
>>> WSARecvFrom won't return the source address.
>>
>> It doesn't anyway, does it?  I mean, this is entirely local and the
>> source address is, basically, the same socket.
> 
> From the Winsock point of view, the sending socket is an AF_INET socket, whose 
> name is a struct sockaddr_in (the crucial data being the port number). 
> fhandler_socket_local::recv_internal then converts the sockaddr_in of the sender 
> to an abstract sockaddr_un that encodes the port number, so that the receiver 
> can send back a reply.

Wait a minute.... I don't think this is a problem after all.  The sender can 
simply include its own address in the packet it sends, as in the new AF_UNIX code.
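
I.e., the tunnel frame would just grow a field (illustrative layout,
not the actual one):

#include <stdint.h>
#include <sys/un.h>

/* Illustrative extension of the tunnel frame sketched earlier: carry
   the sender's address in every datagram so the receiver can reply.  */
struct dgram_frame
{
  uint32_t len;			/* payload length */
  struct sockaddr_un sender;	/* sending socket's own address */
  /* LEN payload bytes follow */
};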

> Aside from this issue, there's also the fact that all the send/recv functions 
> when applied to STREAM sockets expect the socket to be connected.  But if we're 
> using STREAM sockets to emulate DGRAM sockets, they typically won't be 
> connected.  (And "connected" means something different for DGRAMs anyway.)

But I'm still worried about this issue.

Ken

^ permalink raw reply	[flat|nested] 26+ messages in thread

end of thread, other threads:[~2021-05-22 18:21 UTC | newest]

Thread overview: 26+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2021-04-27 15:47 The unreliability of AF_UNIX datagram sockets Ken Brown
2021-04-29 11:05 ` Corinna Vinschen
2021-04-29 11:16   ` Corinna Vinschen
2021-04-29 14:38   ` Ken Brown
2021-04-29 15:05     ` Corinna Vinschen
2021-04-29 15:18       ` Corinna Vinschen
2021-04-29 16:44       ` Ken Brown
2021-04-29 17:39         ` Corinna Vinschen
2021-05-01 21:41           ` Ken Brown
2021-05-03 10:30             ` Corinna Vinschen
2021-05-03 15:45               ` Corinna Vinschen
2021-05-03 16:56                 ` Ken Brown
2021-05-03 18:40                   ` Corinna Vinschen
2021-05-03 19:48                     ` Ken Brown
2021-05-03 20:50                       ` Ken Brown
2021-05-04 11:06                         ` Corinna Vinschen
2021-05-13 14:30                           ` Ken Brown
2021-05-17 10:26                             ` Corinna Vinschen
2021-05-17 13:02                               ` Ken Brown
2021-05-20 13:46   ` Ken Brown
2021-05-20 19:25     ` Corinna Vinschen
2021-05-21 21:54       ` Ken Brown
2021-05-22 15:49         ` Corinna Vinschen
2021-05-22 16:50           ` Ken Brown
2021-05-22 18:21             ` Ken Brown

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for read-only IMAP folder(s) and NNTP newsgroup(s).