- cross-posted to:
- [email protected]
- [email protected]
Hey all
I wanted to show off my new project, webmesh. It’s yet another solution for creating WireGuard mesh networks/VPNs between multiple hosts, most similar to projects like Tailscale/ZeroTier. It differs from the others in its controller-less architecture: the network state is maintained on every node via Raft consensus, so any node can become the “leader” should the current one go away.
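Since the Raft bit is the interesting part, here’s a minimal, self-contained sketch of the approach using the hashicorp/raft library. To be clear, this is not webmesh’s actual code; the `peerFSM` type and the string-based “state” are made up for illustration. It just shows the shape of the idea: a node bootstraps a (here single-node) cluster, becomes leader, and replicates network-state changes through the Raft log.

```go
package main

import (
	"fmt"
	"io"
	"time"

	"github.com/hashicorp/raft"
)

// peerFSM is a toy finite state machine holding the replicated "network
// state" (here just a list of strings). In a real mesh this would track
// WireGuard peers, routes, etc. Purely illustrative.
type peerFSM struct {
	peers []string
}

func (f *peerFSM) Apply(l *raft.Log) interface{} {
	f.peers = append(f.peers, string(l.Data))
	return nil
}

func (f *peerFSM) Snapshot() (raft.FSMSnapshot, error) {
	return nil, fmt.Errorf("not implemented in this sketch")
}

func (f *peerFSM) Restore(io.ReadCloser) error { return nil }

func main() {
	cfg := raft.DefaultConfig()
	cfg.LocalID = raft.ServerID("node-1")

	// In-memory stores and transport keep the example self-contained;
	// a real deployment would use durable stores and a network transport.
	logs := raft.NewInmemStore()
	stable := raft.NewInmemStore()
	snaps := raft.NewInmemSnapshotStore()
	addr, trans := raft.NewInmemTransport("")

	fsm := &peerFSM{}
	r, err := raft.NewRaft(cfg, fsm, logs, stable, snaps, trans)
	if err != nil {
		panic(err)
	}

	// Bootstrap a single-node cluster; it elects itself leader.
	if err := r.BootstrapCluster(raft.Configuration{
		Servers: []raft.Server{{ID: cfg.LocalID, Address: addr}},
	}).Error(); err != nil {
		panic(err)
	}

	// Wait for leadership, then replicate a state change via the log.
	for r.State() != raft.Leader {
		time.Sleep(100 * time.Millisecond)
	}
	if err := r.Apply([]byte("peer: 10.0.0.2"), time.Second).Error(); err != nil {
		panic(err)
	}
	fmt.Println("replicated peers:", fsm.peers)
}
```

With more than one node, any member running this loop can win the election when the leader disappears, which is what removes the need for a dedicated controller.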
GitHub repo in the link above. More info in the README and on the project website: https://webmeshproj.github.io
Excited to hear any feedback :)
Looking good!
This sounds super interesting
Will this work in a situation where all clients are behind NAT? (Specifically cases where the admin has no control over the NAT, like with CGNAT or clients on mobile networks.)
And if it does, how do clients find each other without some central server?
So it will work with clients behind NATs. By default the network is a little different from similar solutions in that not everyone is directly connected peer-to-peer. The default behavior is to branch off from the server you joined, with traffic to the rest of the network routed through it. Then, via the admin API (or configuration/RBAC that needs to be better documented), you can tweak the topology by putting “edges” between devices. If there is no direct connectivity between the devices, they will use ICE tunnels to connect. One of the APIs that can be exposed on nodes helps with candidate negotiation, and another can act as a TURN server if you want. This is sort of demonstrated here: https://github.com/webmeshproj/webmesh/tree/main/examples/direct-peerings, but it’s a contrived test because it all happens on Docker networks.
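To make the topology behavior concrete, here’s a toy model in plain Go. The `graph` type and the node names are invented for the sketch, not webmesh’s data model: every node starts with an edge to the server it joined through, and traffic between two leaves hops through that server until an admin-added edge makes them direct.

```go
package main

import "fmt"

// graph is a toy undirected adjacency map standing in for the mesh topology.
type graph map[string]map[string]bool

func (g graph) addEdge(a, b string) {
	if g[a] == nil {
		g[a] = map[string]bool{}
	}
	if g[b] == nil {
		g[b] = map[string]bool{}
	}
	g[a][b], g[b][a] = true, true
}

// nextHop returns the first hop on a shortest path from src to dst
// (plain BFS; assumes dst is reachable).
func (g graph) nextHop(src, dst string) string {
	prev := map[string]string{src: src}
	queue := []string{src}
	for len(queue) > 0 {
		n := queue[0]
		queue = queue[1:]
		if n == dst {
			break
		}
		for m := range g[n] {
			if _, seen := prev[m]; !seen {
				prev[m] = n
				queue = append(queue, m)
			}
		}
	}
	// Walk back from dst to find the hop adjacent to src.
	hop := dst
	for prev[hop] != src {
		hop = prev[hop]
	}
	return hop
}

func main() {
	g := graph{}
	// Default topology: both devices branched off the server they joined.
	g.addEdge("laptop", "server")
	g.addEdge("phone", "server")
	fmt.Println(g.nextHop("laptop", "phone")) // "server": traffic is relayed

	// Admin puts an edge between the devices; if there were no direct
	// connectivity between them, this is where ICE tunnels would come in.
	g.addEdge("laptop", "phone")
	fmt.Println(g.nextHop("laptop", "phone")) // "phone": direct peering
}
```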
To your second question: currently there has to be somebody already reachable. But I’ve included the idea of a Peer Discovery API server that devices can optionally expose. In that vein, you could have a node that just provides peer discovery and nothing else.
It’s kinda self-defeating though, because the server running that API has to already be a member of the cluster - so in that way it becomes a “central server”. I want to add more options, such as SRV lookups. Always happy for help and more ideas too :)
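For a rough picture of what a discovery-only node could look like, here’s a hypothetical sketch - this is not webmesh’s actual Peer Discovery API. The `/discover` path, the `peer` record shape, and the addresses are all invented; the only point is that the node serves nothing except a list of members a new client could join through.

```go
package main

import (
	"encoding/json"
	"log"
	"net/http"
)

// peer is a made-up record shape for this sketch.
type peer struct {
	ID       string `json:"id"`
	Endpoint string `json:"endpoint"`
}

func main() {
	// Placeholder addresses from the TEST-NET-3 documentation range.
	members := []peer{
		{ID: "node-1", Endpoint: "203.0.113.10:8443"},
		{ID: "node-2", Endpoint: "203.0.113.11:8443"},
	}

	// Hypothetical endpoint: a new client GETs /discover to learn which
	// members it can join through, then connects to one of them.
	http.HandleFunc("/discover", func(w http.ResponseWriter, r *http.Request) {
		w.Header().Set("Content-Type", "application/json")
		if err := json.NewEncoder(w).Encode(members); err != nil {
			log.Println("encode:", err)
		}
	})
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```

An SRV-based option would play the same role without running a server at all: the records would just map a DNS name to member endpoints.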