What causes the randomness of internet speeds, even on Ethernet?

This is becoming less of an issue, but Quality of Service (QoS) filters can cause problems on low-quality or overburdened network equipment. In theory QoS is a good thing, as you don't want your phone service (which also uses your internet connection) to go down just because you are saturating the link, but many consumer-grade devices weren't designed with huge numbers of connections in mind. If several people on your home network are running BitTorrent clients, for example, that can quickly overwhelm low-end equipment (not enough RAM, processing power, or cooling), causing packet loss, or overheating and freezing as the software just can't keep up with what the users are throwing at it.

What is the difference between TCP and UDP?

Will we ever see this eliminated for good?

Is this really the cause of observable variation though? When TCP is increasing its congestion window, and hence data rate, this is happening phenomenally fast; dropping and re-increasing the rate also happens very fast; if you're transmitting large TCP packets at 20 Mbps, you're transmitting a packet roughly every 600 microseconds. Most displays of data speed only update every second or so, and have to average the data transmitted over the last second - as long as the maximum throughput of the link remains roughly constant, it seems to me that this will all happen far too fast to observe any difference in that average.

Thus the fluctuation should come from factors other than TCP, though of course TCP's transmission rate will vary along with them.
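To put numbers on that, here is a quick back-of-the-envelope check; the 1500-byte packet size and the 1-second display update are my assumptions, not figures from the thread:

```python
# Back-of-the-envelope check of the timing claim above.
# Assumed: 1500-byte Ethernet-sized packets on a 20 Mbps link, 1-second display updates.
PACKET_BYTES = 1500
LINK_MBPS = 20

packet_bits = PACKET_BYTES * 8                              # 12,000 bits per packet
packet_interval_s = packet_bits / (LINK_MBPS * 1_000_000)   # time to send one packet
packets_per_display_update = 1.0 / packet_interval_s

print(f"one packet every {packet_interval_s * 1e6:.0f} microseconds")        # ~600 us
print(f"~{packets_per_display_update:.0f} packets averaged per 1 s update")  # ~1,667
```

So a speed meter that updates once a second is averaging over well more than a thousand packets, which is why the fast per-packet behavior gets smoothed out.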

And sometimes the reason for the loss event could be a disk being accessed on the server or client for purposes other than the data transfer being monitored. Other factors include the overall capacity of the network or switch hosting the connection and any latency from the ISP. Sometimes you may have what seems like a gigabit connection between devices, and that may be true, but if the server suddenly takes on an extra client you're going to be sharing the server's gigabit connection with that client as well. Basically, the fluctuations you see in network monitoring are the TCP protocol constantly trying to find the fastest possible speed at which data will transfer.
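If it helps to see that "constantly trying to find the fastest speed" behavior, here is a very rough sketch of the classic additive-increase/multiplicative-decrease (AIMD) idea behind TCP congestion control. It is deliberately simplified (one flow, a fixed bottleneck, loss whenever the window overshoots); real stacks use more elaborate algorithms such as CUBIC or BBR:

```python
# Toy AIMD simulation: grow the congestion window until the (assumed) bottleneck
# drops a packet, then cut the window in half and start growing again.
# This produces the familiar sawtooth that shows up as fluctuating transfer speed.
BOTTLENECK_PKTS = 100   # assumed capacity of the path, in packets per round trip
cwnd = 10.0             # congestion window, packets per round trip

for rtt in range(50):
    if cwnd > BOTTLENECK_PKTS:      # overshoot -> packet loss detected
        cwnd = cwnd / 2             # multiplicative decrease
        event = "loss, window halved"
    else:
        cwnd += 1                   # additive increase, one packet per round trip
        event = ""
    print(f"RTT {rtt:2d}: sending ~{cwnd:5.1f} packets/RTT {event}")
```

Halving on loss and growing by one packet per round trip is why the throughput graph looks like a sawtooth rather than a flat line.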

How does this relate to the randomness in internet speeds?

Shouldn't there be a better way for us to send/receive data instead of having these "waves"?

So that it knows when it reaches the maximum and then just cuts off the growth in data transferred per second, making the transfer speed constant?

Interesting. No one has devised a protocol to cut to, say, 85% instead of nothing? And then increase linearly?

Does CPU overhead have anything to do with network traffic as well?

I always wondered why when downloading large files I would get up to full speed then suddenly the speed would crash, then work back up to full, then crash.

Would it be possible to make TCP not do this?

This is known as windowing, right?

What the heck is Ethernet?

This isn’t really the cause of varying speeds. That’s generally caused by too many hops or congestion on one of the routers between you and the end server.
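If you want to watch that congestion variation yourself, traceroute is the usual tool, but here is a crude stdlib-only sketch that just times repeated TCP connections to one endpoint; the hostname and port are placeholders, not anything from the thread:

```python
import socket
import time

# Quick-and-dirty way to watch latency vary over time: repeatedly time a TCP
# connection to some host. "example.com" is just a placeholder -- substitute
# a server near the path you actually care about.
HOST, PORT, SAMPLES = "example.com", 443, 5

for i in range(SAMPLES):
    start = time.monotonic()
    with socket.create_connection((HOST, PORT), timeout=5):
        pass
    print(f"sample {i + 1}: {(time.monotonic() - start) * 1000:.1f} ms")
    time.sleep(1)
```

The connection times will wander as congestion on the routers along the path comes and goes.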

Sounds like you’re referencing MTUs but those are fairly standardized once you leave a local network and head out past your ISP.

I'm not a fan of this answer as it really only gives one cause and then gets into net neutrality. A big reason for variability in your download is sharing the bandwidth along the route. The other traffic is not constant, so the amount you have access to along the route varies constantly. You can have dedicated/reserved bandwidth to prevent this, but that costs far more. Generally, when you have a residential line of X Mbps, you only have that dedicated connection up to a point at your ISP; from there you are sharing available resources. With cable you are even sharing available bandwidth with your neighbors on the same node.
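As a rough illustration of that sharing (every number below is made up for the example, not any ISP's real ratios): if a neighborhood node has 1 Gbps upstream and 40 subscribers are each sold "100 Mbps", the node is oversubscribed 4:1, and your actual share depends on how many neighbors are busy at that moment:

```python
# Hypothetical oversubscription example -- every number here is an assumption.
node_capacity_mbps = 1000      # shared uplink for the neighborhood node
advertised_mbps = 100          # what each subscriber was sold
subscribers = 40

oversubscription = subscribers * advertised_mbps / node_capacity_mbps
print(f"oversubscription ratio: {oversubscription:.0f}:1")   # 4:1

for active in (5, 10, 20, 40):
    fair_share = node_capacity_mbps / active
    print(f"{active:2d} active neighbors -> up to "
          f"{min(fair_share, advertised_mbps):.0f} Mbps each")
```

With only a few neighbors active you get the full advertised rate; at peak hours the fair share drops well below it, and it keeps moving as people start and stop transfers.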

which may be literally anywhere in the world (or in orbit).

Disagree with this bit; routing protocols won't allow that to happen unless something goes really, really wrong on a global scale with the backbones. Generally the routes taken will be geographically between you and your destination, with a little wiggle room for local oddities.

A free market does not take precedence over free speech.

There are reasons for net neutrality, but that's not it; net neutrality has little, if anything, to do with free speech.

It in fact has EVERYTHING to do with free markets, and the way in which pay-to-play / throttling creates artificial market barriers that prevent the free market from working.

Can we get an explanation that doesn’t jump on a soapbox?

This would be a great answer for ELI5, but lacking a bit of depth. Real-world analogies tend to fall apart on the internet. :slight_smile: Cheers.

They both get called "speeds", but the key terms to distinguish the two are "latency" and "throughput".
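A small worked example of the distinction (the transfer sizes, latency, and link rate below are my own assumptions): total transfer time is roughly one round-trip latency plus size divided by throughput, so latency dominates small transfers and throughput dominates large ones:

```python
# Rough model: time ~= latency + size / throughput (ignores TCP ramp-up, headers, etc.)
latency_s = 0.050                      # assumed 50 ms round trip
throughput_bps = 50 * 1_000_000 / 8    # assumed 50 Mbps link, in bytes per second

for label, size_bytes in [("small web request", 10_000),
                          ("large download", 1_000_000_000)]:
    t = latency_s + size_bytes / throughput_bps
    print(f"{label}: ~{t:.3f} s")
```

That is why a page can feel "slow" on a high-throughput link with bad latency, while a big download barely notices the latency at all.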

Not true; Ethernet in particular is random. It all comes down to how data is transmitted over Ethernet: if a client sees the line is clear, it sends its data. If another client sends its data at the same time, each one picks one of two time slots in which to try again; if that also fails, it doubles the number of time slots it chooses from, and keeps doubling until the data gets through. This is genuinely random, and even with just two clients, if both are unlucky it can take several time slots before the data is sent (although with two clients it is unlikely to be noticeable; with many clients, many transmission attempts can fail).
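For anyone curious, here is a small sketch of the binary exponential backoff described above (the classic shared-medium CSMA/CD behavior); the 51.2 µs slot time is the traditional 10 Mbps Ethernet value, and the whole thing is illustrative rather than a faithful NIC implementation:

```python
import random

# Sketch of binary exponential backoff (classic shared-medium Ethernet / CSMA/CD).
# After the n-th collision a station waits a random number of slot times chosen
# from 0 .. 2**n - 1 (the range is conventionally capped at 1023 slots).
SLOT_TIME_US = 51.2   # traditional 10 Mbps Ethernet slot time, in microseconds

def backoff_delay_us(collision_count: int) -> float:
    slots = random.randint(0, min(2 ** collision_count, 1024) - 1)
    return slots * SLOT_TIME_US

random.seed(1)
for attempt in range(1, 6):
    print(f"after collision {attempt}: wait {backoff_delay_us(attempt):7.1f} us")
```

The doubling range is why a busy shared segment gets unpredictable: the more collisions a frame suffers, the wider (and more random) its wait becomes.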