public inbox for cygwin-developers@cygwin.com
* Problems with the (new) implementation of AF_UNIX datagram sockets
@ 2021-04-14 16:15 Ken Brown
  2021-04-15 11:49 ` Corinna Vinschen
  0 siblings, 1 reply; 10+ messages in thread
From: Ken Brown @ 2021-04-14 16:15 UTC (permalink / raw)
  To: cygwin-devel

Hi Corinna,

This is a follow-up to

   https://cygwin.com/pipermail/cygwin/2021-April/248284.html

I don't know if you've been following that thread, but two serious problems with 
datagram sockets (on the topic/af_unix branch) have shown up.

1. Writing will block until a connection to the peer's pipe can be made.  In 
particular, if there are two consecutive writes with the same peer, the second 
one will block until the peer reads the first message.  This happens because the 
peer's pipe is not available for the second connection until the peer 
disconnects the first connection.  This is currently done in recvmsg, and I 
don't see a straightforward way to do it anywhere else.

2. There's no way for select to test whether a datagram socket is ready for 
writing.  That's because we can't know whether a connection to a hypothetical 
peer's pipe will be possible.  According to Stevens, the issue *should* be 
whether there's space in the socket's send buffer.  But our sockets don't have a 
send buffer until they connect to a pipe.

I think the solution to both problems is for Cygwin to maintain a send buffer 
for datagram sockets.  Does that seem right, or do you have another idea?

I haven't yet thought much about how to implement a send buffer, but I wanted to 
check with you before investing time in it.  And if you agree that this is the 
way to go, I'd appreciate any implementation suggestions you might have.

Thanks.

Ken


* Re: Problems with the (new) implementation of AF_UNIX datagram sockets
  2021-04-14 16:15 Problems with the (new) implementation of AF_UNIX datagram sockets Ken Brown
@ 2021-04-15 11:49 ` Corinna Vinschen
  2021-04-15 13:16   ` Ken Brown
  0 siblings, 1 reply; 10+ messages in thread
From: Corinna Vinschen @ 2021-04-15 11:49 UTC (permalink / raw)
  To: cygwin-developers

Hi Ken,

On Apr 14 12:15, Ken Brown wrote:
> Hi Corinna,
> 
> This is a follow-up to
> 
>   https://cygwin.com/pipermail/cygwin/2021-April/248284.html
> 
> I don't know if you've been following that thread, but two serious problems
> with datagram sockets (on the topic/af_unix branch) have shown up.
> 
> 1. Writing will block until a connection to the peer's pipe can be made.  In
> particular, if there are two consecutive writes with the same peer, the
> second one will block until the peer reads the first message.  This happens
> because the peer's pipe is not available for the second connection until the
> peer disconnects the first connection.  This is currently done in recvmsg,
> and I don't see a straightforward way to do it anywhere else.

I'm a bit puzzled.  The idea for datagrams was to call open/send/close
in each invocation of sendmsg.  Therefore the pipe should become
available as soon as the other peer has sent its data block.  The time
sendmsg has to wait for the pipe to become available should be quite short!

> 2. There's no way for select to test whether a datagram socket is ready for
> writing.  That's because we can't know whether a connection to a
> hypothetical peer's pipe will be possible.  According to Stevens, the issue
> *should* be whether there's space in the socket's send buffer.  But our
> sockets don't have a send buffer until they connect to a pipe.

Even then, there's no guarantee a send will succeed, given that
select/send are not running atomically.  However, we *could* for a start
always return success from select for this scenario.  If we have a
nonblocking socket, it should fail opening the pipe and return EAGAIN,
which is perfectly fine.  If we have a blocking socket, it could block
on send, which is perfectly valid, too, because of the non-atomicity.

Or am I missing something?

> I think the solution to both problems is for Cygwin to maintain a send
> buffer for datagram sockets.  Does that seem right, or do you have another
> idea?

In theory the send buffer should be a shared buffer between all peers,
so this could be constructed as a shared ring buffer, accessible from
af_unix_shmem_t.  But then again, this introduces a security problem,
so that's not a good idea.  So, process-local buffers.

But you also have the problem of how to empty the buffer.  Do you start a
new thread which checks if the pipe is getting available and if so,
sends the buffer content?  In which process?  And what do you do if
there's still data in the send buffer when the process exits?  This is
annoyingly complicated and error-prone.

Another idea might be to implement send/recv on a DGRAM socket a bit
like accept.  Rather than creating a single_instance socket, we create a
max_instance socket as for STREAM socket listeners.  The server side
accepts the connection at recv and immediately opens another pipe
instance, so we always have at least one dangling instance for the next
peer.


Corinna


P.S.: Idle musings...

The only other implementation of AF_UNIX sockets using named pipes on
Windows I know of (U/WIN) implements the gory details as part of their
privileged server process, i.e., their equivalent of cygserver.  The
difference is that the entire system is based on this server process, so
the U/WIN processes don't run at all if that service isn't running,
quite unlike Cygwin.  Requiring a server running just to allow AF_UNIX
sockets to work seems a bit off for us...

Having said that, maybe the idea to implement AF_UNIX sockets as named
pipes is... outdated?  Roughly 90% of our users are running a W10
version supporting AF_UNIX sockets natively (albeit missing native
SOCK_DGRAM support).  Perhaps it's time to switch...?


* Re: Problems with the (new) implementation of AF_UNIX datagram sockets
  2021-04-15 11:49 ` Corinna Vinschen
@ 2021-04-15 13:16   ` Ken Brown
  2021-04-15 13:58     ` Corinna Vinschen
  0 siblings, 1 reply; 10+ messages in thread
From: Ken Brown @ 2021-04-15 13:16 UTC (permalink / raw)
  To: cygwin-developers

On 4/15/2021 7:49 AM, Corinna Vinschen wrote:
> Hi Ken,
> 
> On Apr 14 12:15, Ken Brown wrote:
>> Hi Corinna,
>>
>> This is a follow-up to
>>
>>    https://cygwin.com/pipermail/cygwin/2021-April/248284.html
>>
>> I don't know if you've been following that thread, but two serious problems
>> with datagram sockets (on the topic/af_unix branch) have shown up.
>>
>> 1. Writing will block until a connection to the peer's pipe can be made.  In
>> particular, if there are two consecutive writes with the same peer, the
>> second one will block until the peer reads the first message.  This happens
>> because the peer's pipe is not available for the second connection until the
>> peer disconnects the first connection.  This is currently done in recvmsg,
>> and I don't see a straightforward way to do it anywhere else.
> 
> I'm a bit puzzled.  The idea for datagrams was to call open/send/close
> in each invocation of sendmsg.  Therefore the pipe should become
> available as soon as the other peer has sent its data block.  The time
> sendmsg has to wait for the pipe to become available should be quite short!

Unfortunately, the pipe isn't available until the server disconnects.  I 
observed this in practice, and it's also documented at

https://docs.microsoft.com/en-us/windows/win32/api/namedpipeapi/nf-namedpipeapi-disconnectnamedpipe

"The server process must call DisconnectNamedPipe to disconnect a pipe handle 
from its previous client before the handle can be connected to another client by 
using the ConnectNamedPipe function."

>> 2. There's no way for select to test whether a datagram socket is ready for
>> writing.  That's because we can't know whether a connection to a
>> hypothetical peer's pipe will be possible.  According to Stevens, the issue
>> *should* be whether there's space in the socket's send buffer.  But our
>> sockets don't have a send buffer until they connect to a pipe.
> 
> Even then, there's no guarantee a send will succeed, given that
> select/send are not running atomically.  However, we *could* for a start
> always return success from select for this scenario.  If we have a
> nonblocking socket, it should fail opening the pipe and return EAGAIN,
> which is perfectly fine.  If we have a blocking socket, it could block
> on send, which is perfectly valid, too, because of the non-atomicity.
> 
> Or am I missing something?

No, I was missing the non-atomicity.  So maybe that's OK.

>> I think the solution to both problems is for Cygwin to maintain a send
>> buffer for datagram sockets.  Does that seem right, or do you have another
>> idea?
> 
> In theory the send buffer should be a shared buffer between all peers,
> so this could be constructed as a shared ring buffer, accessible from
> af_unix_shmem_t.  But then again, this introduces a security problem,
> so that's not a good idea.  So, process-local buffers.
> 
> But you also have the problem of how to empty the buffer.  Do you start a
> new thread which checks if the pipe is getting available and if so,
> sends the buffer content?  In which process?  And what do you do if
> there's still data in the send buffer when the process exits?  This is
> annoyingly complicated and error-prone.

Agreed.

> Another idea might be to implement send/recv on a DGRAM socket a bit
> like accept.  Rather than creating a single_instance socket, we create a
> max_instance socket as for STREAM socket listeners.  The server side
> accepts the connection at recv and immediately opens another pipe
> instance, so we always have at least one dangling instance for the next
> peer.

I thought about that, but you would still have the problem (as in 1 above) that 
the pipe instance isn't available until recv is called.

> 
> Corinna
> 
> 
> P.S.: Idle musings...
> 
> The only other implementation of AF_UNIX sockets using named pipes on
> Windows I know of (U/WIN) implements the gory details as part of their
> privileged server process, i.e., their equivalent of cygserver.  The
> difference is that the entire system is based on this server process, so
> the U/WIN processes don't run at all if that service isn't running,
> quite unlike Cygwin.  Requiring a server running just to allow AF_UNIX
> sockets to work seems a bit off for us...
> 
> Having said that, maybe the idea to implement AF_UNIX sockets as named
> pipes is... outdated?  Roughly 90% of our users are running a W10
> version supporting AF_UNIX sockets natively (albeit missing native
> SOCK_DGRAM support).  Perhaps it's time to switch...?

Maybe so.

Ken


* Re: Problems with the (new) implementation of AF_UNIX datagram sockets
  2021-04-15 13:16   ` Ken Brown
@ 2021-04-15 13:58     ` Corinna Vinschen
  2021-04-15 14:53       ` Ken Brown
  0 siblings, 1 reply; 10+ messages in thread
From: Corinna Vinschen @ 2021-04-15 13:58 UTC (permalink / raw)
  To: cygwin-developers

On Apr 15 09:16, Ken Brown wrote:
> On 4/15/2021 7:49 AM, Corinna Vinschen wrote:
> > On Apr 14 12:15, Ken Brown wrote:
> [...]
> > > 1. Writing will block until a connection to the peer's pipe can be
> > > made.  In particular, if there are two consecutive writes with the
> > > same peer, the second one will block until the peer reads the
> > > first message.  This happens because the peer's pipe is not
> > > available for the second connection until the peer disconnects the
> > > first connection.  This is currently done in recvmsg,
> > > and I don't see a straightforward way to do it anywhere else.
> > 
> > I'm a bit puzzled.  The idea for datagrams was to call open/send/close
> > in each invocation of sendmsg.  Therefore the pipe should become
> > available as soon as the other peer has sent its data block.  The time
> > sendmsg has to wait for the pipe to become available should be quite short!
> 
> Unfortunately, the pipe isn't available until the server disconnects.  I
> observed this in practice, and it's also documented at
> 
> https://docs.microsoft.com/en-us/windows/win32/api/namedpipeapi/nf-namedpipeapi-disconnectnamedpipe
> 
> "The server process must call DisconnectNamedPipe to disconnect a pipe
> handle from its previous client before the handle can be connected to
> another client by using the ConnectNamedPipe function."

d'oh

> [...]
> > Another idea might be to implement send/recv on a DGRAM socket a bit
> > like accept.  Rather than creating a single_instance socket, we create a
> > max_instance socket as for STREAM socket listeners.  The server side
> > accepts the connection at recv and immediately opens another pipe
> > instance, so we always have at least one dangling instance for the next
> > peer.
> 
> I thought about that, but you would still have the problem (as in 1 above)
> that the pipe instance isn't available until recv is called.

There always is at least one instance.  Do you mean, two clients are
trying to send while the server is idly playing with his toes?


Corinna


* Re: Problems with the (new) implementation of AF_UNIX datagram sockets
  2021-04-15 13:58     ` Corinna Vinschen
@ 2021-04-15 14:53       ` Ken Brown
  2021-04-15 23:50         ` Mark Geisert
  0 siblings, 1 reply; 10+ messages in thread
From: Ken Brown @ 2021-04-15 14:53 UTC (permalink / raw)
  To: cygwin-developers

On 4/15/2021 9:58 AM, Corinna Vinschen wrote:
> On Apr 15 09:16, Ken Brown wrote:
>> On 4/15/2021 7:49 AM, Corinna Vinschen wrote:
>>> On Apr 14 12:15, Ken Brown wrote:
>> [...]
>>>> 1. Writing will block until a connection to the peer's pipe can be
>>>> made.  In particular, if there are two consecutive writes with the
>>>> same peer, the second one will block until the peer reads the
>>>> first message.  This happens because the peer's pipe is not
>>>> available for the second connection until the peer disconnects the
>>>> first connection.  This is currently done in recvmsg,
>>>> and I don't see a straightforward way to do it anywhere else.
>>>
>>> I'm a bit puzzled.  The idea for datagrams was to call open/send/close
>>> in each invocation of sendmsg.  Therefore the pipe should become
>>> available as soon as the other peer has sent its data block.  The time
>>> sendmsg has to wait for the pipe to become available should be quite short!
>>
>> Unfortunately, the pipe isn't available until the server disconnects.  I
>> observed this in practice, and it's also documented at
>>
>> https://docs.microsoft.com/en-us/windows/win32/api/namedpipeapi/nf-namedpipeapi-disconnectnamedpipe
>>
>> "The server process must call DisconnectNamedPipe to disconnect a pipe
>> handle from its previous client before the handle can be connected to
>> another client by using the ConnectNamedPipe function."
> 
> d'oh
> 
>> [...]
>>> Another idea might be to implement send/recv on a DGRAM socket a bit
>>> like accept.  Rather than creating a single_instance socket, we create a
>>> max_instance socket as for STREAM socket listeners.  The server side
>>> accepts the connection at recv and immediately opens another pipe
>>> instance, so we always have at least one dangling instance for the next
>>> peer.
>>
>> I thought about that, but you would still have the problem (as in 1 above)
>> that the pipe instance isn't available until recv is called.
> 
> There always is at least one instance.  Do you mean, two clients are
> trying to send while the server is idly playing with his toes?

Yes.  That was essentially the situation in the test case attached to

   https://cygwin.com/pipermail/cygwin/2021-April/248210.html

It was actually one client sending many messages while the server was playing 
with his toes, but the effect was the same.

Ken


* Re: Problems with the (new) implementation of AF_UNIX datagram sockets
  2021-04-15 14:53       ` Ken Brown
@ 2021-04-15 23:50         ` Mark Geisert
  2021-04-16  9:37           ` Corinna Vinschen
  0 siblings, 1 reply; 10+ messages in thread
From: Mark Geisert @ 2021-04-15 23:50 UTC (permalink / raw)
  To: cygwin-developers

Ken Brown wrote:
> On 4/15/2021 9:58 AM, Corinna Vinschen wrote:
>> On Apr 15 09:16, Ken Brown wrote:
>>> On 4/15/2021 7:49 AM, Corinna Vinschen wrote:
>>>> On Apr 14 12:15, Ken Brown wrote:
>>> [...]
>>>>> 1. Writing will block until a connection to the peer's pipe can be
>>>>> made.  In particular, if there are two consecutive writes with the
>>>>> same peer, the second one will block until the peer reads the
>>>>> first message.  This happens because the peer's pipe is not
>>>>> available for the second connection until the peer disconnects the
>>>>> first connection.  This is currently done in recvmsg,
>>>>> and I don't see a straightforward way to do it anywhere else.
>>>>
>>>> I'm a bit puzzled.  The idea for datagrams was to call open/send/close
>>>> in each invocation of sendmsg.  Therefore the pipe should become
>>>> available as soon as the other peer has sent its data block.  The time
>>>> sendmsg has to wait for the pipe to become available should be quite short!
>>>
>>> Unfortunately, the pipe isn't available until the server disconnects.  I
>>> observed this in practice, and it's also documented at
>>>
>>> https://docs.microsoft.com/en-us/windows/win32/api/namedpipeapi/nf-namedpipeapi-disconnectnamedpipe
>>>
>>> "The server process must call DisconnectNamedPipe to disconnect a pipe
>>> handle from its previous client before the handle can be connected to
>>> another client by using the ConnectNamedPipe function."
>>
>> d'oh
>>
>>> [...]
>>>> Another idea might be to implement send/recv on a DGRAM socket a bit
>>>> like accept.  Rather than creating a single_instance socket, we create a
>>>> max_instance socket as for STREAM socket listeners.  The server side
>>>> accepts the connection at recv and immediately opens another pipe
>>>> instance, so we always have at least one dangling instance for the next
>>>> peer.
>>>
>>> I thought about that, but you would still have the problem (as in 1 above)
>>> that the pipe instance isn't available until recv is called.
>>
>> There always is at least one instance.  Do you mean, two clients are
>> trying to send while the server is idly playing with his toes?
> 
> Yes.  That was essentially the situation in the test case attached to
> 
>    https://cygwin.com/pipermail/cygwin/2021-April/248210.html
> 
> It was actually one client sending many messages while the server was playing with 
> his toes, but the effect was the same.

Sending datagrams between processes on the same system could be thought of as 
similar to sending/receiving messages on a POSIX message queue.  Though the mq_* 
man pages make it seem like mqs are intended for within-process messaging.  But if 
a datagram receiver created a message queue that datagram senders could open, 
couldn't that provide buffering and allow multiple clients?  Kindly ignore if insane.

..mark


* Re: Problems with the (new) implementation of AF_UNIX datagram sockets
  2021-04-15 23:50         ` Mark Geisert
@ 2021-04-16  9:37           ` Corinna Vinschen
  2021-04-17  2:54             ` Mark Geisert
  0 siblings, 1 reply; 10+ messages in thread
From: Corinna Vinschen @ 2021-04-16  9:37 UTC (permalink / raw)
  To: cygwin-developers

On Apr 15 16:50, Mark Geisert wrote:
> Ken Brown wrote:
> > On 4/15/2021 9:58 AM, Corinna Vinschen wrote:
> > > On Apr 15 09:16, Ken Brown wrote:
> > > > On 4/15/2021 7:49 AM, Corinna Vinschen wrote:
> > > > [...]
> > > > > Another idea might be to implement send/recv on a DGRAM socket a bit
> > > > > like accept.  Rather than creating a single_instance socket, we create a
> > > > > max_instance socket as for STREAM socket listeners.  The server side
> > > > > accepts the connection at recv and immediately opens another pipe
> > > > > instance, so we always have at least one dangling instance for the next
> > > > > peer.
> > > > 
> > > > I thought about that, but you would still have the problem (as in 1 above)
> > > > that the pipe instance isn't available until recv is called.
> > > 
> > > There always is at least one instance.  Do you mean, two clients are
> > > trying to send while the server is idly playing with his toes?
> > 
> > Yes.  That was essentially the situation in the test case attached to
> > 
> >    https://cygwin.com/pipermail/cygwin/2021-April/248210.html
> > 
> > It was actually one client sending many messages while the server was
> > playing with his toes, but the effect was the same.
> 
> Sending datagrams between processes on the same system could be thought of
> as similar to sending/receiving messages on a POSIX message queue.  Though
> the mq_* man pages make it seem like mqs are intended for within-process
> messaging.  But if a datagram receiver created a message queue that datagram
> senders could open, couldn't that provide buffering and allow multiple
> clients?  Kindly ignore if insane.

Interesting idea, actually.  Message queues already implement a lot of
what a unix socket needs in terms of sending/receiving data.  The pipe
would only be needed for credential and descriptor passing, ultimately :)



Corinna


* Re: Problems with the (new) implementation of AF_UNIX datagram sockets
  2021-04-16  9:37           ` Corinna Vinschen
@ 2021-04-17  2:54             ` Mark Geisert
  2021-04-17 16:05               ` Ken Brown
  0 siblings, 1 reply; 10+ messages in thread
From: Mark Geisert @ 2021-04-17  2:54 UTC (permalink / raw)
  To: cygwin-developers

Corinna Vinschen wrote:
> On Apr 15 16:50, Mark Geisert wrote:
>> Ken Brown wrote:
>>> On 4/15/2021 9:58 AM, Corinna Vinschen wrote:
>>>> On Apr 15 09:16, Ken Brown wrote:
>>>>> On 4/15/2021 7:49 AM, Corinna Vinschen wrote:
>>>>> [...]
>>>>>> Another idea might be to implement send/recv on a DGRAM socket a bit
>>>>>> like accept.  Rather than creating a single_instance socket, we create a
>>>>>> max_instance socket as for STREAM socket listeners.  The server side
>>>>>> accepts the connection at recv and immediately opens another pipe
>>>>>> instance, so we always have at least one dangling instance for the next
>>>>>> peer.
>>>>>
>>>>> I thought about that, but you would still have the problem (as in 1 above)
>>>>> that the pipe instance isn't available until recv is called.
>>>>
>>>> There always is at least one instance.  Do you mean, two clients are
>>>> trying to send while the server is idly playing with his toes?
>>>
>>> Yes.  That was essentially the situation in the test case attached to
>>>
>>>     https://cygwin.com/pipermail/cygwin/2021-April/248210.html
>>>
>>> It was actually one client sending many messages while the server was
>>> playing with his toes, but the effect was the same.
>>
>> Sending datagrams between processes on the same system could be thought of
>> as similar to sending/receiving messages on a POSIX message queue.  Though
>> the mq_* man pages make it seem like mqs are intended for within-process
>> messaging.  But if a datagram receiver created a message queue that datagram
>> senders could open, couldn't that provide buffering and allow multiple
>> clients?  Kindly ignore if insane.
> 
> Interesting idea, actually.  Message queues already implement a lot of
> what a unix socket needs in terms of sending/receiving data.  The pipe
> would only be needed for credential and descriptor passing, ultimately :)

One might be able to deal with credentials/descriptor passing within the message 
queue by using message priority to distinguish the "message" types.  mq_receive() 
always gives you the oldest, highest priority, message available in the queue.

I'll have to look over the usual DGRAM references again, but OTTOMH if credentials 
are just euids and egids maybe they could be handled as permissions on the file 
backing the message queue.  If the filename (in a particular name space we set up) 
is just the port number one could treat ENOENT as meaning nobody listening on that 
port, while EPERM could result from credentials not matching the file's 
permissions.  Makes some sense but I'm unsure if it covers all needs.

..mark


* Re: Problems with the (new) implementation of AF_UNIX datagram sockets
  2021-04-17  2:54             ` Mark Geisert
@ 2021-04-17 16:05               ` Ken Brown
  2021-04-19  8:48                 ` Corinna Vinschen
  0 siblings, 1 reply; 10+ messages in thread
From: Ken Brown @ 2021-04-17 16:05 UTC (permalink / raw)
  To: cygwin-developers

On 4/16/2021 10:54 PM, Mark Geisert wrote:
> Corinna Vinschen wrote:
>> On Apr 15 16:50, Mark Geisert wrote:
>>> Ken Brown wrote:
>>>> On 4/15/2021 9:58 AM, Corinna Vinschen wrote:
>>>>> On Apr 15 09:16, Ken Brown wrote:
>>>>>> On 4/15/2021 7:49 AM, Corinna Vinschen wrote:
>>>>>> [...]
>>>>>>> Another idea might be to implement send/recv on a DGRAM socket a bit
>>>>>>> like accept.  Rather than creating a single_instance socket, we create a
>>>>>>> max_instance socket as for STREAM socket listeners.  The server side
>>>>>>> accepts the connection at recv and immediately opens another pipe
>>>>>>> instance, so we always have at least one dangling instance for the next
>>>>>>> peer.
>>>>>>
>>>>>> I thought about that, but you would still have the problem (as in 1 above)
>>>>>> that the pipe instance isn't available until recv is called.
>>>>>
>>>>> There always is at least one instance.  Do you mean, two clients are
>>>>> trying to send while the server is idly playing with his toes?
>>>>
>>>> Yes.  That was essentially the situation in the test case attached to
>>>>
>>>>     https://cygwin.com/pipermail/cygwin/2021-April/248210.html
>>>>
>>>> It was actually one client sending many messages while the server was
>>>> playing with his toes, but the effect was the same.
>>>
>>> Sending datagrams between processes on the same system could be thought of
>>> as similar to sending/receiving messages on a POSIX message queue.  Though
>>> the mq_* man pages make it seem like mqs are intended for within-process
>>> messaging.  But if a datagram receiver created a message queue that datagram
>>> senders could open, couldn't that provide buffering and allow multiple
>>> clients?  Kindly ignore if insane.
>>
>> Interesting idea, actually.  Message queues already implement a lot of
>> what a unix socket needs in terms of sending/receiving data.  The pipe
>> would only be needed for credential and descriptor passing, ultimately :)
> 
> One might be able to deal with credentials/descriptor passing within the message 
> queue by using message priority to distinguish the "message" types.  
> mq_receive() always gives you the oldest, highest priority, message available in 
> the queue.
> 
> I'll have to look over the usual DGRAM references again, but OTTOMH if 
> credentials are just euids and egids maybe they could be handled as permissions 
> on the file backing the message queue.  If the filename (in a particular name 
> space we set up) is just the port number one could treat ENOENT as meaning 
> nobody listening on that port, while EPERM could result from credentials not 
> matching the file's permissions.  Makes some sense but I'm unsure if it covers 
> all needs.

A couple of comments:

First, I don't think we want to limit this to DGRAM sockets.  The code in 
fhandler_socket_unix.cc already packages I/O into packets (see 
af_unix_pkt_hdr_t), for both the STREAM and DGRAM cases.  We could just treat 
each packet as a message.  In the STREAM case we would have to deal with the 
case of a partial read, but I think I see how to do that.

Second, I don't think we need to invent a new way of handling credentials.  We 
already have send_sock_info and recv_peer_info.  The only question is whether we 
use a pipe or a message queue.  Corinna, what was your reason for saying we need 
the pipe for that?  Are there security issues with using a message queue?

Ken


* Re: Problems with the (new) implementation of AF_UNIX datagram sockets
  2021-04-17 16:05               ` Ken Brown
@ 2021-04-19  8:48                 ` Corinna Vinschen
  0 siblings, 0 replies; 10+ messages in thread
From: Corinna Vinschen @ 2021-04-19  8:48 UTC (permalink / raw)
  To: cygwin-developers

On Apr 17 12:05, Ken Brown wrote:
> On 4/16/2021 10:54 PM, Mark Geisert wrote:
> > Corinna Vinschen wrote:
> > > On Apr 15 16:50, Mark Geisert wrote:
> > > > Sending datagrams between processes on the same system could be thought of
> > > > as similar to sending/receiving messages on a POSIX message queue.  Though
> > > > the mq_* man pages make it seem like mqs are intended for within-process
> > > > messaging.  But if a datagram receiver created a message queue that datagram
> > > > senders could open, couldn't that provide buffering and allow multiple
> > > > clients?  Kindly ignore if insane.
> > > 
> > > Interesting idea, actually.  Message queues already implement a lot of
> > > what a unix socket needs in terms of sending/receiving data.  The pipe
> > > would only be needed for credential and descriptor passing, ultimately :)
> > 
> > One might be able to deal with credentials/descriptor passing within the
> > message queue by using message priority to distinguish the "message"
> > types.  mq_receive() always gives you the oldest, highest priority,
> > message available in the queue.
> > 
> > I'll have to look over the usual DGRAM references again, but OTTOMH if
> > credentials are just euids and egids maybe they could be handled as
> > permissions on the file backing the message queue.  If the filename (in
> > a particular name space we set up) is just the port number one could
> > treat ENOENT as meaning nobody listening on that port, while EPERM could
> > result from credentials not matching the file's permissions.  Makes some
> > sense but I'm unsure if it covers all needs.
> 
> A couple of comments:
> 
> First, I don't think we want to limit this to DGRAM sockets.  The code in
> fhandler_socket_unix.cc already packages I/O into packets (see
> af_unix_pkt_hdr_t), for both the STREAM and DGRAM cases.  We could just
> treat each packet as a message.  In the STREAM case we would have to deal
> with the case of a partial read, but I think I see how to do that.
> 
> Second, I don't think we need to invent a new way of handling credentials.
> We already have send_sock_info and recv_peer_info.  The only question is
> whether we use a pipe or a message queue.  Corinna, what was your reason for
> saying we need the pipe for that?  Are there security issues with using a
> message queue?

Long-standing problem.  The peer sends uid/gid values, but how's the
server to know that these values are correct?  The idea was at one
point to replace this with a server-side call to
ImpersonateNamedPipeClient and to retrieve the peer's credentials in
the server, so we actually *know* whom we're talking with.


Corinna

