08-20-2012, 04:14 AM   #75
chaley
 
Quote:
Originally Posted by kovidgoyal
@charles: Just some thoughts about networking performance:

Is the 1/4 second round trip time independent of the amount of data being sent?
Within reason, yes. For example, we send books in 200K packets, and those seem to take the same time as packets carrying 100 bytes. As an experiment we tried sending entire books as one "packet". In that case we did see normal network latencies added to the turn-around latency. Unfortunately, it also made older devices fall over dead, so we had to pull the change out.
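
To make the shape concrete, here is a rough sketch of packetised sending with a per-packet acknowledgement. The framing and names are hypothetical, not our actual wire format; the point is that every packet costs one full turn-around:

Code:
import socket

PACKET_SIZE = 200 * 1024  # ~200K per packet, as above

def send_book(sock, path):
    """Send a book in fixed-size packets, waiting for an ack per packet.

    Hypothetical sketch: the real protocol frames packets differently,
    but each packet still pays one full request/response turn-around.
    """
    with open(path, "rb") as f:
        while True:
            chunk = f.read(PACKET_SIZE)
            if not chunk:
                break
            sock.sendall(chunk)      # one "packet"
            ack = sock.recv(16)      # wait for the device to answer
            if not ack:
                raise ConnectionError("device closed the connection")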
Quote:
1) You can compress data, though you are probably constrained by the limited device capabilities.
Yes, we are. One "side benefit" of our decision to support Android 2 is that we are seeing devices that are truly limited and slow. In addition, as noted above, the size of the packets doesn't seem to have much effect on the latency.
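
For what it's worth, if we did compress, it would probably look something like this optional zlib step. This is a sketch only; the function and threshold are made up, and a real protocol would negotiate a capability flag first so slow devices could refuse compression:

Code:
import zlib

def maybe_compress(payload, min_ratio=0.9):
    """Compress a payload, keeping the original if the gain is too small.

    Hypothetical sketch: returns (data, was_compressed) so the sender
    can mark the packet and the device knows whether to decompress.
    """
    packed = zlib.compress(payload, 6)
    if len(packed) < len(payload) * min_ratio:
        return packed, True
    return payload, False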
Quote:
2) It seems odd that threading would cause a 1/4 second roundtrip. If you can confirm that, say by writing a single-threaded Python program that talks to the device in test mode, then there might be something you can do to alleviate the problem. Some ideas: Use a more capable networking library, like zeromq. Or write a C extension that releases the GIL during each "session" of talking to the device.
I haven't tried it in Python, but I did build a test bed in Java to debug the protocol. When the test bed is connected to the device, the latencies drop dramatically, down to around 50ms.
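
If someone wants to try the single-threaded test kovid suggests, it would be roughly the following; the host, port, and request framing are placeholders, and it assumes each response fits in one recv():

Code:
import socket
import time

def time_round_trips(host, port, request, n=50):
    """Average n request/response round trips on a single thread.

    Hypothetical sketch: with no other threads there is no GIL
    contention, so whatever latency remains comes from the network
    or the device itself.
    """
    sock = socket.create_connection((host, port))
    try:
        total = 0.0
        for _ in range(n):
            start = time.time()
            sock.sendall(request)
            sock.recv(4096)  # assume one buffer holds the response
            total += time.time() - start
        return total / n
    finally:
        sock.close()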

My uninformed guess is that we are competing for the GIL with the other calibre threads, such as the UI and metadata backup. I don't have any proof of that, however.

What I am looking at now is "streaming" information in certain cases. The base protocol is stateless request/response. If we "streamed", certain operations would become stateful request/response operations. This should increase throughput by eliminating the turn-around latency per response. Of course, it also increases the risk of running out of some resource on the device (they seem to be good at that). We will see.
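
To illustrate the difference, here is a sketch of the two shapes. The length-prefixed JSON framing and the operation names are invented for the example, not the real protocol, but they show where the turn-arounds go:

Code:
import json
import struct

def send_msg(sock, obj):
    """Send one JSON message with a 4-byte length prefix (made-up framing)."""
    data = json.dumps(obj).encode("utf-8")
    sock.sendall(struct.pack(">I", len(data)) + data)

def recv_exact(sock, n):
    """Read exactly n bytes, looping because recv() may return short."""
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("device closed the connection")
        buf += chunk
    return buf

def recv_msg(sock):
    """Read one length-prefixed JSON message."""
    (length,) = struct.unpack(">I", recv_exact(sock, 4))
    return json.loads(recv_exact(sock, length).decode("utf-8"))

def get_metadata_stateless(sock, book_ids):
    """Stateless: one request, one response, one turn-around per book."""
    for book_id in book_ids:
        send_msg(sock, {"op": "GET_METADATA", "id": book_id})
        yield recv_msg(sock)

def get_metadata_streamed(sock, book_ids):
    """Streamed: one stateful request, then N responses in a row.

    One turn-around for the whole batch, but the device must hold
    state for the stream and can run out of memory doing it.
    """
    send_msg(sock, {"op": "GET_METADATA_STREAM", "ids": book_ids})
    for _ in book_ids:
        yield recv_msg(sock)

The win is entirely in the turn-arounds: N of them collapse into one, at the cost of the device tracking where it is in the stream.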