I noticed that the Monero chain compresses by about 60%. Would it be possible to compress blocks before sending them from a remote node to a syncing wallet, saving a big chunk of bandwidth and time?
Does anyone know if this is already happening during sync, or if not why?
edit: this can be done using ssh tunnels, if you have ssh access to your remote server. The “-C” option enables compression.
ssh -C -NL 18089:localhost:18089 server_username@server_address
Now you can point your wallet at 127.0.0.1:18089 and your syncing should be faster. Enjoy!
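For example, with monero-wallet-cli you can pass the tunneled address directly (the wallet file name here is just a placeholder for your own):

monero-wallet-cli --wallet-file mywallet --daemon-address 127.0.0.1:18089

The GUI wallet has the same option under its node settings; either way, all daemon traffic then flows through the compressed ssh tunnel.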
If the issue is processing power, how come the fees and which node you use (public or not) make such a difference? What's the bottleneck?
Well, the bandwidth of the remote node is a potential bottleneck, as well as the bandwidth of the person syncing. Whichever is smaller is going to be the maximum rate at which data is sent (ignoring the connection path, for simplicity), and it can affect the speed of sync significantly. If you've got a powerful computer that can do a ton of operations per second and check a ton of blocks for transactions, your bottleneck is going to be bandwidth.

If we decide to compress the blocks as you get them, you can alleviate that, at the cost of decompressing the blocks and thus slowing your processing of them. Compression is a computational trade-off: the harder you compress the blocks, the more work it takes to pack and unpack the data, and the relationship is not linear; 10% more compression can cost well more than 10% more processing time.

Compressing too much eats up the bandwidth benefit you were going to get, and there's a point of equilibrium that's different for each node on the network, based on its bandwidth and processing power. Obviously, we cannot compress differently for each node, so compression is necessarily a trade-off between bandwidth and hardware capability: any compression favors low-bandwidth, higher-power nodes, while no compression favors higher-bandwidth, lower-power nodes. Further, a lossless compression scheme cannot compress beyond certain limits, so there's a practical ceiling even ignoring processing time. You also have to consider the processing power of the remote node, since it has to compress the blocks in the first place.
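To get a feel for that trade-off, here's a rough sketch in Python using zlib on made-up, somewhat compressible bytes (not real Monero block data), showing how higher compression levels shrink the payload less and less while costing more CPU:

import time
import zlib

# Hypothetical stand-in for raw block data; real blocks will compress differently.
sample = (b"transaction-output-ring-signature-" * 2000) + bytes(range(256)) * 500

for level in (1, 6, 9):
    t0 = time.perf_counter()
    compressed = zlib.compress(sample, level)   # higher level = more CPU spent packing
    t1 = time.perf_counter()
    zlib.decompress(compressed)                 # cost paid by the syncing wallet
    t2 = time.perf_counter()
    ratio = len(compressed) / len(sample)
    print(f"level {level}: size ratio {ratio:.2%}, "
          f"compress {1000*(t1-t0):.1f} ms, decompress {1000*(t2-t1):.1f} ms")

Typically the jump from level 1 to 6 buys a lot of bandwidth per extra millisecond, while the jump from 6 to 9 buys very little for noticeably more compression time, which is the diminishing-returns/equilibrium point described above.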