rpcrt4: Implement RpcMgmtWaitServerListen

Dan Hipschman dsh at linux.ucla.edu
Thu Apr 12 13:14:41 CDT 2007


On Thu, Apr 12, 2007 at 11:47:05AM +0100, Robert Shearman wrote:
> Dan Hipschman wrote:
> >@@ -94,7 +95,8 @@ struct connection_ops {
> >   RpcConnection *(*alloc)(void);
> >   RPC_STATUS (*open_connection_client)(RpcConnection *conn);
> >   RPC_STATUS (*handoff)(RpcConnection *old_conn, RpcConnection *new_conn);
> >-  int (*read)(RpcConnection *conn, void *buffer, unsigned int len);
> >+  int (*read)(RpcConnection *conn, void *buffer, unsigned int len, BOOL check_stop_event);
> >+  int (*signal_to_stop)(RpcConnection *conn);
> >   int (*write)(RpcConnection *conn, const void *buffer, unsigned int len);
> >   int (*close)(RpcConnection *conn);
> >   size_t (*get_top_of_tower)(unsigned char *tower_data, const char *networkaddr, const char *endpoint);
> >  
> 
> Hmm, I'm not sure it needs to be this complicated.

I'm not sure it does, either, but it's the simplest thing I can think of
at the moment.  Each connection basically just needs to wait for one of
two events: (1) an incoming RPC, or (2) a shutdown request.  Since the
existing code blocks on a read waiting for an incoming packet, and hence
can't poll a global state variable, the simplest thing I could think of
was to create an event for shutdowns and multiplex the two waits.  If
this were pure POSIX, I might consider using signals.  I could also make
the read time out and check a state variable every once in a while, but
that just seemed sloppy.  The patch I went with doesn't add all that
much code, so it seemed reasonably simple to me, but maybe I'm missing
something better.
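
To make that concrete, here's a rough sketch (not the actual patch code) of
what the multiplexed read could look like for the named-pipe transport.  The
helper name, the pipe handle and the stop_event parameter are made up for
illustration; it assumes the usual Win32 headers and a pipe opened with
FILE_FLAG_OVERLAPPED:

  /* Illustrative only: wait for data or for a shutdown request. */
  static int np_read_with_stop(HANDLE pipe, HANDLE stop_event,
                               void *buffer, unsigned int len)
  {
    OVERLAPPED ovl = {0};
    HANDLE events[2];
    DWORD bytes_read;

    ovl.hEvent = CreateEventW(NULL, TRUE, FALSE, NULL);
    if (!ovl.hEvent) return -1;

    if (!ReadFile(pipe, buffer, len, NULL, &ovl) &&
        GetLastError() != ERROR_IO_PENDING)
    {
      CloseHandle(ovl.hEvent);
      return -1;
    }

    events[0] = ovl.hEvent;   /* incoming packet */
    events[1] = stop_event;   /* signal_to_stop fired */
    if (WaitForMultipleObjects(2, events, FALSE, INFINITE) != WAIT_OBJECT_0)
    {
      CancelIo(pipe);         /* shutdown: abandon the pending read */
      CloseHandle(ovl.hEvent);
      return -1;
    }

    if (!GetOverlappedResult(pipe, &ovl, &bytes_read, FALSE))
      bytes_read = (DWORD)-1;
    CloseHandle(ovl.hEvent);
    return (int)bytes_read;
  }

The socket transports could do the same kind of thing with select() on the
socket plus a local pipe that signal_to_stop writes a byte to.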

One thing I don't particularly like about this solution is that it
creates an event / pipe for each connection, when a single event / pipe
per protocol would probably do, but sharing one would add
synchronization complexity, so I opted for the simpler, less efficient
code.  I can go either way, though.

> >   HeapFree(GetProcessHeap(), 0, msg);
> > }
> > 
> >-static DWORD CALLBACK RPCRT4_worker_thread(LPVOID the_arg)
> >-{
> >-  RpcPacket *pkt = the_arg;
> >-  RPCRT4_process_packet(pkt->conn, pkt->hdr, pkt->msg);
> >-  HeapFree(GetProcessHeap(), 0, pkt);
> >-  return 0;
> >-}
> >-
> > static DWORD CALLBACK RPCRT4_io_thread(LPVOID the_arg)
> > {
> >   RpcConnection* conn = (RpcConnection*)the_arg;
> >@@ -319,10 +322,14 @@ static DWORD CALLBACK RPCRT4_io_thread(LPVOID the_arg)
> >   RpcBinding *pbind;
> >   RPC_MESSAGE *msg;
> >   RPC_STATUS status;
> >-  RpcPacket *packet;
> > 
> >   TRACE("(%p)\n", conn);
> > 
> >+  EnterCriticalSection(&client_connections_cs);
> >+  list_add_head(&client_connections, &conn->client_entry);
> >+  ResetEvent(clients_completed_event);
> >+  LeaveCriticalSection(&client_connections_cs);
> >+
> >   for (;;) {
> >     msg = HeapAlloc(GetProcessHeap(), HEAP_ZERO_MEMORY, sizeof(RPC_MESSAGE));
> > 
> >@@ -338,17 +345,17 @@ static DWORD CALLBACK RPCRT4_io_thread(LPVOID the_arg)
> >       break;
> >     }
> > 
> >-#if 0
> >     RPCRT4_process_packet(conn, hdr, msg);
> >-#else
> >-    packet = HeapAlloc(GetProcessHeap(), 0, sizeof(RpcPacket));
> >-    packet->conn = conn;
> >-    packet->hdr = hdr;
> >-    packet->msg = msg;
> >-    QueueUserWorkItem(RPCRT4_worker_thread, packet, WT_EXECUTELONGFUNCTION);
> >-#endif
> >-    msg = NULL;
> >   }
> >+
> >+  EnterCriticalSection(&client_connections_cs);
> >+  list_remove(&conn->client_entry);
> >+  if (list_empty(&client_connections)) {
> >+    TRACE("last in the list to complete (%p)\n", conn);
> >+    SetEvent(clients_completed_event);
> >+  }
> >+  LeaveCriticalSection(&client_connections_cs);
> >+
> >   RPCRT4_DestroyConnection(conn);
> >   return 0;
> > }
> >  
> 
> I'm not sure of your reasoning for doing this. If I'm not mistaken, this 
> change makes it so that only one RPC call at a time is processed.

I took out the thread pool stuff because it makes it harder to wait for
all the RPCs to complete.  As far as I can tell, the control flow goes
like this: (1) a server thread (one per protocol / port) accepts
connections from clients; (2) when the server thread gets a connection,
it creates an I/O thread and goes back to listening; (3) the I/O thread
waits for RPC packets and processes them.  This should still allow
concurrent processing of RPCs from different client connections,
although only one RPC per client connection would be processed at a
time.  If the client had multiple threads each making RPCs to the same
server over the same connection, then, yes, this could degrade
performance.  It might not be hard to put the thread pool back in.  I'll
see what I can do.
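
For reference, here's a rough sketch of how the bookkeeping above could back
RpcMgmtWaitServerListen itself.  It's illustrative only (the real function
also has to check the listening state and return RPC_S_NOT_LISTENING and so
on), but it shows why I wanted a single event that gets set when the last
connection finishes:

  RPC_STATUS WINAPI RpcMgmtWaitServerListen( void )
  {
    TRACE("()\n");

    EnterCriticalSection(&client_connections_cs);
    if (list_empty(&client_connections))
      SetEvent(clients_completed_event);  /* nothing outstanding right now */
    LeaveCriticalSection(&client_connections_cs);

    /* Blocks until the last RPCRT4_io_thread runs the list_remove /
       SetEvent sequence shown in the hunk above. */
    WaitForSingleObject(clients_completed_event, INFINITE);

    return RPC_S_OK;
  }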

I'll try resubmitting this in a bit, as I've tweaked the original a
little and added more tests, but if you or Alexandre still have some
fundamental beef with it, I can try doing things totally differently.
Thanks for the criticism.

Dan



