One of the attendees of my recent webinar on sockets programming for Windows Phone (you can watch it online on Silverlight Show here) brought up a rather complex issue in the Q&A session, which I then took offline for further review. As it turns out, what he was facing was not one but two bugs in (desktop) Silverlight's multicast client implementation. At first I thought this might be a problem with Silverlight 5, but amazingly everything described here seems to be valid for Silverlight 4 too. I don't know whether this also affects Windows Phone, but since the platforms presumably share that code, it's likely.
Silverlight provides two multicast client variants: one for handling traffic from a single source (SSM) and a second one for traffic from multiple/any sources (ASM/ISM). The sample I was analyzing showed a problem with the latter, which is named UdpAnySourceMulticastClient in Silverlight. The idea behind that client is that you can not only send data to a multicast group, but also directly to a specific member of that group (peer to peer), using methods (SendTo) similar to those of normal UDP sockets.
Internally, the multicast client implementations (obviously) use sockets. The first problem with the ASM client is that, at the time it creates its internal UDP socket, it initially configures it to block all incoming packets, for unknown reasons. Unfortunately it fails to reset this configuration later: when a user of the client invokes BeginReceiveFromGroup, the underlying socket takes the data-filtering code path based on that initial configuration. That means that even though the ASM client would be able to deal with multiple peers, it never gets the chance to, because its socket now filters out any traffic from non-multicast source addresses. You, as the original caller, are never notified of incoming packets from peers.
… and part 2
While the previously described problem just looks like a glitch in the implementation, in particular in the initialization of the underlying socket, the second problem is more severe and more conceptual.
The way the socket filters messages is that it simply does not complete the pending async operation (which would signal completion to the original caller). Instead, it discards what has been received and kicks off a new receive operation to capture the next incoming packet. The problem is that even though this is completely transparent to the caller, by the time the socket decides a packet should be filtered, the receive buffer passed in by the original caller already contains that packet's contents. The socket makes no attempt to clear that buffer or restore it to its original state before it starts the next receive operation.
As a result, once a valid packet is received and the original caller is notified, the buffer may still contain fragments of one or more previously received, discarded packets (whenever the valid packet is smaller than a previously filtered one). This was exactly what was happening in the sample application I analyzed. Coincidentally, the whole received message (a combination of the packet that should be reported and fragments of a previously discarded packet) still looked like a validly formatted application message, but with bogus data in it.
With TCP, you would implement some sort of protocol that allows you to reassemble application-level messages properly, for example by using a length-prefixed format or termination tokens. With UDP, however, each datagram is self-delimiting, and people therefore assume that, for example, if they pass in a buffer of zeroed bytes they can retrieve an application-level message simply by trimming the remaining zero bytes from that buffer afterwards. In this particular case, a strategy like that fails, and in extreme cases it would allow a malicious sender of UDP datagrams to inject data into your application, even if its address was actively blocked(!).
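To make the hazard concrete, here is a minimal sketch in plain Python (not the Silverlight API; `fake_receive` is a made-up stand-in for a socket that writes into a caller-supplied buffer) showing how a reused, uncleared receive buffer combined with zero-trimming produces exactly this kind of corrupted-but-plausible message:

```python
# Caller-supplied receive buffer, initially zeroed (as many UDP apps assume).
buf = bytearray(16)

def fake_receive(buf, payload):
    """Mimic a socket writing an incoming datagram into a shared buffer
    and returning the number of bytes received."""
    buf[:len(payload)] = payload
    return len(payload)

# A packet from a filtered (blocked) peer arrives first; the socket copies it
# into the buffer, then silently discards it without clearing the buffer.
fake_receive(buf, b"FILTERED-PACKET!")

# A shorter, valid packet arrives next and only overwrites the first bytes.
n = fake_receive(buf, b"VALID")

# Trimming trailing zeros now yields stale fragments of the discarded packet:
message = bytes(buf).rstrip(b"\x00")
print(message)         # b'VALIDRED-PACKET!' -- corrupted, yet "looks" complete
print(bytes(buf[:n]))  # b'VALID' -- only the reported length is trustworthy
```

The takeaway: never reconstruct a message from buffer contents alone; rely on an explicit length or framing instead.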
The first problem cannot be worked around at the moment, I think; there is no way to access or reconfigure the socket from your code. So the solution is to not use the peer-to-peer features of multicast when you communicate with Silverlight clients, and/or to use separate sockets for this instead.
Thinkable workarounds for the second problem (which still applies) are the above-mentioned approaches for building a more robust application protocol even though you're using UDP: use length-prefixed or properly terminated messages so you're able to discard any bogus data that might still be left in the receive buffer from previously filtered messages.
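The length-prefix workaround could look like this sketch in Python (the `pack_message`/`unpack_message` names and the 2-byte big-endian prefix are illustrative choices, not part of any Silverlight API):

```python
import struct

def pack_message(payload: bytes) -> bytes:
    """Prefix the payload with its length as a 2-byte big-endian integer."""
    return struct.pack(">H", len(payload)) + payload

def unpack_message(buf: bytes) -> bytes:
    """Read only as many bytes as the prefix declares; anything beyond that
    (e.g. fragments of a previously discarded packet) is ignored."""
    (length,) = struct.unpack_from(">H", buf, 0)
    if length > len(buf) - 2:
        raise ValueError("truncated or corrupt datagram")
    return buf[2 : 2 + length]

# A receive buffer that still holds trailing garbage from a filtered packet:
dirty_buffer = pack_message(b"VALID") + b"LEFTOVER-JUNK"
print(unpack_message(dirty_buffer))  # b'VALID' -- stale bytes are ignored
```

A termination-token scheme works similarly, as long as the token is guaranteed not to appear in the payload; the length prefix avoids that escaping problem, which is why I'd lean towards it here.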
Thoughts, corrections and other comments are welcome :).
Edit: the Connect entry for this, with additional details/a summary can be found here.