Has anyone done a deep dive on the effects of latency on VPN throughput while connected to GlobalProtect? I am looking into problems for a handful of users located across the country from our gateways. With latencies in excess of 75 ms they struggle to pull more than 30 Mbit/s. I know we cannot make light travel faster on the wires, but are there any tweaks or alterations that might make longer-distance, higher-latency connections more workable or efficient?
If your app is affected by latency, there aren't a million solutions.
Definitely make sure your users are getting the IPsec transport on GlobalProtect and not the SSL transport.
Consider tuning TCP receive windows, if at all possible (sometimes it isn't), to force them up to around 2 MB; see the sketch below for why that size matters at your RTT.
If your application can use UDP instead of TCP, that may improve transfer speeds over high latency because you're no longer waiting on TCP acknowledgements.
And then of course: consider moving your app closer to your users to lower their latency. For example, spinning up a GlobalProtect gateway in a cloud location that's proximate to your users and putting an outpost of your app in that same cloud. Maybe not feasible for all apps.
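To put rough numbers on the receive-window point: a single TCP stream tops out around window_size / RTT, ignoring loss and overhead. A quick back-of-the-envelope sketch (Python, example figures only, assuming roughly the 75 ms RTT mentioned above):

```python
# Rough upper bound on one TCP stream: window_bytes / RTT.
# The RTT and window sizes below are example figures, not measurements.

def max_tcp_throughput_mbps(window_bytes: int, rtt_ms: float) -> float:
    """Best-case throughput for one TCP stream, ignoring loss and overhead."""
    return (window_bytes * 8) / (rtt_ms / 1000) / 1_000_000

rtt = 75.0  # ms
for window in (64 * 1024, 256 * 1024, 2 * 1024 * 1024):
    print(f"{window // 1024:>5} KB window at {rtt:.0f} ms RTT -> "
          f"~{max_tcp_throughput_mbps(window, rtt):.0f} Mbps max")
```

That works out to roughly 7 Mbps for a 64 KB window, 28 Mbps for 256 KB, and 224 Mbps for 2 MB. A 256 KB window at 75 ms lands right around the ~30 Mbit you're seeing, which is why a capped receive window is the first suspect.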
Test using a lower MTU.
Two options I have seen: 1. Turn off server response inspection (DSRI) in the security rule to the share. 2. An app override if option 1 does not work. SMB, especially Windows shares, is terrible for performance. Web-based shares help as well, but not when the client application cannot utilise web shares.
I have a few users whose only ISP option is satellite, and it sucks: 500 ms+! But there's really nothing I can do for them.
Do they have T-Mobile home internet? If so, set up a second agent configuration under the portal config with a lower MTU for those users. An MTU of 1350 seems to be doing well in our situation.
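If you want to sanity-check what MTU actually survives a given user's path before settling on a number like 1350, probing with don't-fragment pings is the usual trick. A rough sketch below; the helper is hypothetical, it assumes Windows ping flags (-f don't fragment, -l payload size, -n count), and the target address is a placeholder:

```python
# Hypothetical MTU probe: binary-search the largest ICMP payload that gets
# through with the "don't fragment" bit set, then add 28 bytes of IP + ICMP
# headers to get the path MTU. Uses Windows ping flags (-f, -l, -n).
import subprocess

def path_mtu(host: str, low: int = 1200, high: int = 1472) -> int:
    best = 0
    while low <= high:
        size = (low + high) // 2
        ok = subprocess.run(
            ["ping", "-f", "-l", str(size), "-n", "1", host],
            capture_output=True, text=True,
        ).returncode == 0
        if ok:
            best, low = size, size + 1   # payload fit, try larger
        else:
            high = size - 1              # needed fragmentation, try smaller
    return best + 28 if best else 0      # 20-byte IP + 8-byte ICMP headers

print(path_mtu("192.0.2.1"))  # placeholder address, point it at something on your side
```

For reference, 1472 + 28 = 1500, the usual Ethernet ceiling; if a user's path can't carry close to that, a lower tunnel MTU like the 1350 above is worth trying.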
Bandwidth-delay product will play a hand if latency is really the issue. TCP window size tuning can be tricky on a Windows OS.
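Rough math with the numbers upthread: 300 Mbit/s × 75 ms ≈ 22.5 Mbit in flight, or roughly 2.8 MB, so any effective window much smaller than that (not the ISP) becomes the per-stream ceiling.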
For users with connectivity to congested ISPs, there is no tweak.
If your application is SMB, try disabling app inspection, basically an app override.
Microsoft has SMB over QUIC, but I am not sure it is available outside Azure.
Try WinSCP, which can be tuned to use multiple connections for a single file transfer.
Last resort: make your users RDP to a local VM.
Unfortunately these users need access to large files via the tunnel that are only stored on-prem in our data ctr. Even if I split 90% of their internet traffic, I would still have a problem.
Home internet; in both cases I have been involved with, the user has a minimum of 300 Mbit service. Once they are connected to our GP gateway we see their speeds (tested using iperf) drop below 40 Mbps, while my home internet within 100 miles of the gateway will test around 200 Mbps.
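One way to separate a per-flow window/latency cap from a path or firewall cap: run iperf3 with one stream and then with several parallel streams and compare. A rough wrapper sketch; the server name is a placeholder for an iperf3 server in your data center, and -c/-P/-t are standard iperf3 options:

```python
# Compare one iperf3 stream against eight parallel streams through the tunnel.
# If the aggregate of 8 streams is far higher than a single stream, the path
# has headroom and the per-flow limit (window size at 75 ms) is the bottleneck.
# "iperf.example.internal" is a placeholder for an on-prem iperf3 server.
import subprocess

SERVER = "iperf.example.internal"

for streams in (1, 8):
    print(f"--- {streams} parallel stream(s) ---")
    subprocess.run(
        ["iperf3", "-c", SERVER, "-P", str(streams), "-t", "10"],
        check=False,  # just show iperf3's own output, don't raise on failure
    )
```

If eight streams get close to the ~200 Mbps your own connection sees, the per-flow window is the bottleneck and window tuning or a multi-connection transfer tool (like the WinSCP suggestion above) is where the gains are, rather than anything on the gateway itself.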
I have run into sporadic MTU issues (we run 1400 presently) with users who jump onto a wifi hotspot. The user with the bandwidth issues is on Spectrum home internet.
As /u/jacksbox suggested, use IPSec for the transport. Have you confirmed which you are using?
It is a checkbox and will be the easiest change you can make that will have some positive effect on your user community.
We are IPsec by default and only fail over to SSL if GP detects instability.
What they mean is: confirm these users are not falling back to SSL due to instability.
They are on IPsec, though I tested both.