First - what is the Nagle algorithm, and what is its purpose?
The basic idea of the Nagle algorithm is to queue up small amounts of data and hold off sending them while previously sent data is still waiting to be acknowledged. If we are sending a large packet, there is no effect. The goal is to keep many small packets, each carrying its own header overhead, from clogging the network: since we are waiting on an acknowledgment anyway, we can coalesce the queued data and send it once the pending send has been confirmed.
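In .NET this behavior is controlled per socket. A quick sketch (assuming System.Net.Sockets): the NoDelay property, or the equivalent raw socket option, disables Nagle entirely - keep this in mind for the tests later in this post.

```csharp
using System.Net.Sockets;

Socket s = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);

// Nagle is on by default; small writes may be held back while an ACK is pending.
// Setting NoDelay = true disables Nagle (TCP_NODELAY), so each Send goes out immediately.
s.NoDelay = true;

// Equivalent, using the raw socket option:
s.SetSocketOption(SocketOptionLevel.Tcp, SocketOptionName.NoDelay, true);
```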
More details can be found at:
The magic value here is the MSS (Maximum Segment Size), which specifies the largest amount of data that can be carried in a single TCP segment. Upon establishing a connection, each side advertises its MSS in the options of its SYN segment.
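Putting the MSS and the pending-acknowledgment rule together, the algorithm's send decision can be sketched as pseudocode (this is the classic formulation from RFC 896, not actual .NET source):

```
// Nagle's send decision, per segment:
// if there is new data to send:
//     if the data fills a full segment (>= MSS):
//         send it immediately
//     else if no previously sent data is still unacknowledged:
//         send the small segment immediately
//     else:
//         queue the data until an ACK arrives
//         or a full segment's worth accumulates
```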
There's a simple way to test the effects of Nagle and compare sockets against remoting. Remoting wraps your data in its own headers, so even a tiny payload goes out as a larger message, which changes how Nagle treats it.
I find myself at times saying "Nagle is bad in scenario X" (RDP, for instance, where a user expects a near real-time response), so at times it does make sense to turn it off.
There is a great article on this subject at:
In it, Alun Jones makes a great point: basically, if you see gains from turning Nagle off, then either your application is broken or the protocol is broken. Take RDP, for example. A user expects a real-time response. Since we currently live in a TCP/IP-connected world, TCP/IP is basically a must for a remote connection. Therefore the protocol is broken for our required use (although I hesitate to use the word "broken" rather than "unfitting").
A simple test of the effects can be reproduced with the following scenario.
1. Use .Net remoting and make a simple method call that takes a string but does nothing with it (remoting in Singleton mode, so one instance of the class responds to requests).
2. Make a simple socket call to send a small string (less than 20 bytes) to a method.
The remoting call is significantly faster: 107 requests per second compared to the socket's 58 in one test.
Sockets are simple on the receive end (and the send):

Socket ss = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
ss.Bind(new IPEndPoint(IPAddress.Any, 9000)); // port is arbitrary
ss.Listen(100); // 100 is the backlog connection queue
Socket sock = ss.Accept();
byte[] read = new byte[1024];
int bytes = sock.Receive(read, 1024, SocketFlags.None);
// Ideally you would spawn off another thread here to handle the client;
// results are similar, this is done here for simplicity.
string messagefromclient = System.Text.Encoding.ASCII.GetString(read, 0, bytes);
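The send side is just as short. A minimal sketch (the endpoint and payload here are placeholders for illustration):

```csharp
using System.Net;
using System.Net.Sockets;
using System.Text;

Socket client = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
client.Connect(new IPEndPoint(IPAddress.Loopback, 9000)); // must match the listener's port

// A small payload like this is exactly what Nagle may hold back
// while an earlier send is still unacknowledged.
client.Send(Encoding.ASCII.GetBytes("login:user"));
```

Flip client.NoDelay to true before the Send calls and rerun the timing to see the difference Nagle makes.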
Let's say in this scenario this is a mission-critical authentication application that requires high throughput. The login packet may be extremely small, and having Nagle enabled can slow this down incredibly. Certain applications may say this is acceptable; game development usually cannot. A small movement on the screen usually equates to a small packet of data to be sent. Under Nagle, that small packet is queued whenever an acknowledgment is still pending, going out only when the acknowledgment arrives or enough data accumulates to fill a maximum-size segment. In the fast-paced realm of gaming, where near-instantaneous response is required, Nagle can slow the game down significantly. In this case it seems Alun Jones would say the protocol is broken, since disabling Nagle yields a real benefit.

One could argue here: why even use TCP? Why not just use UDP? UDP, after all, doesn't concern us with packet size issues. It just sends. There's no concept of a connection; the data just goes. But... will it arrive? Possibly. Do you know that you will receive packets in order? NO!
You have to write something into your system to handle these issues; those are the tradeoffs with UDP, and mostly I'd say this pertains to games.
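To give a feel for what "writing something into your system" means, here is a rough sketch of the send side of sequencing over UDP (the header layout and endpoint are invented for illustration):

```csharp
using System;
using System.Net;
using System.Net.Sockets;
using System.Text;

Socket udp = new Socket(AddressFamily.InterNetwork, SocketType.Dgram, ProtocolType.Udp);
EndPoint target = new IPEndPoint(IPAddress.Loopback, 9001); // placeholder endpoint

uint sequence = 0;
byte[] payload = Encoding.ASCII.GetBytes("move:x=1,y=2");

// Prefix each datagram with a 4-byte sequence number so the receiver
// can detect drops and discard (or reorder) late packets itself.
byte[] packet = new byte[4 + payload.Length];
BitConverter.GetBytes(sequence++).CopyTo(packet, 0);
payload.CopyTo(packet, 4);
udp.SendTo(packet, target);
```

The receiver then tracks the highest sequence number seen and decides what to do with late or missing packets - exactly the bookkeeping TCP would otherwise do for you.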
There is an excellent article on this for those interested in reading more (including more info on sequencing packets):
With all this said, if you must have data integrity, ordering, and connections, then stay with TCP. If you cannot accept the performance hit, then switch protocols; otherwise, accept the hit you will take to guarantee your data's order (unless you write your own management routines on top of UDP, as outlined in the article above).