Opened 8 years ago
Closed 5 years ago
#1370 closed defect (fixed)
chunk size in network protocol
| Reported by: | Dimitar Misev | Owned by: | Dimitar Misev |
|---|---|---|---|
| Priority: | major | Milestone: | 10.0 |
| Component: | clientcomm | Version: | development |
| Keywords: | | Cc: | George Merticariu |
| Complexity: | Medium | | |
Description
When inserting binary data with rasql, it seems to be quite slow (about 5x) compared to using directql. It appears that the data is split into rather small chunks for transport? I'm just wondering if rasnet has been stress tested on some bigger data.
Attached is a script that reproduces the issue with a 1.4 GB data file: rasql takes about 350s, directql about 60-70s.
Attachments (1)
Change History (7)
by , 8 years ago
comment:1 by , 8 years ago
| Component: | rasnet → clientcomm |
|---|---|
comment:2 by , 8 years ago
Does anybody know:
- How is the size of tiles that are sent to the client decided?
- What is the maximum size of each chunk that is sent at a time via gRPC?
comment:3 by , 8 years ago
| Summary: | rasnet performance issue with binary data → chunk size in network protocol |
|---|---|
For encoded data like tiff it is hardcoded as far as I could see in source:raslib/rminit.cc:
r_Bytes RMInit::clientTileSize = 786432;
I tried simply setting it to the whole tile size, but for 1 GB this failed with some stream error.
Any idea what would be the optimal size?
For binary data the chunk size seemed to be smaller (about 70000 bytes), and rasql was stuck at 100% CPU usage until the data was shipped; I didn't investigate it much.
comment:4 by , 8 years ago
Google Protobuf has a limit of 64 MB per message. I suggest increasing the limit to up to 60 MB for each tile.
comment:5 by , 8 years ago
| Milestone: | Future → 10.0 |
|---|---|
| Owner: | changed from | to |
| Status: | new → assigned |
comment:6 by , 5 years ago
| Resolution: | → fixed |
|---|---|
| Status: | assigned → closed |
The time is spent decomposing the large data into tiles on the client side; the tile size doesn't have much effect on the transport speed.
I increased RMInit::clientTileSize to 4 MB (just under the gRPC max message size of 4.1 MB), and the total time went from 250s down to 45s; with this I think the issue is fixed.
By default the transfer tiles are around 700 KB; I tested with 7 MB and 70 MB and there's no real difference.