What causes the randomness of internet speeds, even on Ethernet?

If your service provider is transmitting over fiber or copper (as opposed to satellite), it can also be affected by how many people are accessing the network in your area. All of the houses nearby are connected to central nodes. Each node services hundreds of people unless you buy a dedicated line. The more people online, the slower your internet is going to be. Commercial and residential customers are commonly put on the same nodes.
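As a rough, hypothetical illustration of that sharing (all numbers below are made up), here's how the best-case per-subscriber throughput shrinks as more neighbors on the same node come online:

```python
# Hypothetical illustration of a shared access node (numbers are made up).
NODE_CAPACITY_MBPS = 1_000          # total capacity of the neighborhood node
SUBSCRIBERS_ON_NODE = 400           # homes/businesses sharing it

for fraction_active in (0.05, 0.25, 0.50, 0.75):
    active = int(SUBSCRIBERS_ON_NODE * fraction_active)
    per_user = NODE_CAPACITY_MBPS / max(active, 1)
    print(f"{active:3d} active users -> ~{per_user:6.1f} Mbps each (best case, even split)")
```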

Let's look at what we're actually talking about here: the rate of packet transmission and reception, and therefore the overall throughput of data, varies even when talking to a machine on the same LAN - and you want to know why.

First, packet transmission in your local machine is not done on a set timer - it happens when the process sending the packet is scheduled to execute, and interrupts to the CPU can preempt that process.
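You can see that scheduling jitter for yourself with a quick sketch like the one below: it asks the OS to wake up every millisecond and records how late each wake-up actually is (the exact numbers depend entirely on your OS and load).

```python
# Rough sketch: measure how far actual wake-ups drift from a 1 ms schedule.
# The spread you see is the same kind of nondeterminism that affects when
# a process actually gets to put a packet on the wire.
import time

INTERVAL = 0.001  # target: wake up every 1 ms
lateness_us = []

next_deadline = time.perf_counter() + INTERVAL
for _ in range(1000):
    time.sleep(max(0.0, next_deadline - time.perf_counter()))
    now = time.perf_counter()
    lateness_us.append((now - next_deadline) * 1e6)
    next_deadline += INTERVAL

print(f"min {min(lateness_us):.1f} us, max {max(lateness_us):.1f} us, "
      f"avg {sum(lateness_us) / len(lateness_us):.1f} us late")
```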

Secondly, the actual data in those packets has to be generated - and that depends on the application layer. If I'm typing in an SSH connection, then there's some randomness from the rate at which I type.

Thirdly, there are latencies that come from the network itself - the Ethernet switch is not perfect, and if it's busy there will be increased latency (yes, even on switches - if there's one CPU handling the switching of frames, that CPU has nondeterministic response times because we don't know ahead of time what inputs it's going to get).
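A toy queueing simulation (purely illustrative: random arrivals into a single-server queue) shows why a busy forwarding CPU produces variable latency even when every frame takes exactly the same time to process:

```python
# Toy single-queue simulation: frames arrive at random, one CPU forwards
# them at a fixed service time. Waiting time varies purely because of how
# arrivals happen to bunch up.
import random

random.seed(1)
SERVICE_TIME = 1.0          # time to forward one frame (arbitrary units)
ARRIVAL_RATE = 0.8          # average arrivals per unit time (80% load)

t = 0.0
cpu_free_at = 0.0
waits = []
for _ in range(10_000):
    t += random.expovariate(ARRIVAL_RATE)      # next frame arrives
    start = max(t, cpu_free_at)                # may have to wait in the queue
    waits.append(start - t)
    cpu_free_at = start + SERVICE_TIME

print(f"average wait {sum(waits) / len(waits):.2f}, worst wait {max(waits):.2f}")
```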

This is all before you even hit your router and the internet - to get to the internet there's the transmission medium connecting your home or workplace to your ISP, and random stray EM fields may force packet retransmissions or temporarily lower the available bandwidth, depending on the type of link.
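To get a feel for how even a small retransmission rate eats into an otherwise fixed link, here's a back-of-the-envelope calculation (link speed and loss rates are invented examples, and the model ignores timeouts):

```python
# Back-of-the-envelope: useful throughput when a fraction of packets must be
# sent again because noise corrupted them. Numbers are illustrative only.
LINK_MBPS = 100.0

for loss in (0.0, 0.01, 0.05, 0.10):
    # On average each packet is transmitted 1 / (1 - loss) times,
    # so the share of the link carrying new data scales by (1 - loss).
    goodput = LINK_MBPS * (1.0 - loss)
    print(f"{loss:4.0%} retransmission rate -> ~{goodput:5.1f} Mbps of useful data")
```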

Once inside your ISP's network, we have all the same issues as on your LAN, but multiplied by the thousands of extra users, so there's even more randomness introduced here.

Then there's an element of randomness introduced at each router hop to the destination because of other traffic, and at the destination itself the server (and the server's network) may be more or less busy from moment to moment.

On the way back to you, packets from the server may take totally different routes depending on the hops in between your ISP and the server's ISP, and all the same randomness comes into play again.

Put simply: every step of the process has some element of randomness, and it all adds up, so that from moment to moment your throughput as seen at the application layer will always have some variance. The more steps involved, the greater the variance.
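A small Monte Carlo sketch makes that "more steps, more variance" point concrete: give each hop a random delay and watch the spread of the end-to-end total grow with the number of hops (the delay distribution here is invented purely for illustration):

```python
# Sketch: end-to-end delay is the sum of many per-hop delays, each with some
# randomness. More hops -> wider spread in the total.
import random
import statistics

random.seed(2)

def end_to_end_delay(hops):
    # Each hop: 1 ms base plus a random queueing component (illustrative).
    return sum(1.0 + random.expovariate(1.0) for _ in range(hops))

for hops in (2, 8, 20):
    samples = [end_to_end_delay(hops) for _ in range(5000)]
    print(f"{hops:2d} hops: mean {statistics.mean(samples):5.1f} ms, "
          f"stdev {statistics.stdev(samples):4.1f} ms")
```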

To some degree, QoS implementations can reduce this variance - but it's physically impossible to eliminate it completely. Even factors like the physical movement of hard drive heads and random stray EM fields affect the overall data rates.

It all depends on which side of the connection the bottleneck is on. Unfortunately, nowadays there are so many variables in determining internet speed that it's very hard to pin down.

At the source you have the actual file you are accessing - is it hosted on a link faster than you can consume, and what is the latency from you to that node? Then you have links and routing: how does your client get to the destination? Do you have to traverse the Pacific Ocean to get this file? Then you have ISP issues - there are usually several ISPs involved; sometimes there's a wholesale ISP and then the retail ISP, so the problem could be anywhere with those guys. THEN they have to get the data from the ISP to your phone/fibre/coax point in your house - what step-downs and media converters are required here? Are you on DSL with a max of 20 Mbps? If so, all of the ISP infrastructure would have to be PERFECT and lossless to get you that speed, as well as you being close to the telco exchange.

THEN, last of all, a huge number of people have wireless in their homes. Wireless, bar none, is the lossiest way of transmitting lots of data, and it will increase latency to the destination by at least a few ms…

So after saying all that, the reason for random speeds even over Ethernet could be any number of the things I mentioned earlier - all it takes is one of those things to not work well and bam… slow speed.

Internet speeds are random because of the structure of the internet. When you connect to something on the internet, it could be thousands of miles away. Your information must travel through hundreds of different routers/switches/internet service providers (ISPs). If any one of these has any sort of congestion, then your overall speed is affected. Just like traveling during the holidays has a higher chance of taking longer because of the amount of traffic, the internet works the same way: during times when more people are using the internet, there is a higher chance of slowdowns.

There are also additional factors, such as the technology your local ISP is using. I can go more in-depth on the impacts of your local ISP if anyone would like me to.

There are little trolls that carry your internet through those pipes they put in the ground. Sometimes the trolls get lazy and take a nap, then the other trolls have to jump over that one and run around him, which really slows things down and mucks stuff up.

The simple answer is that servers are providing the data at different speeds. The variation is upstream, where resources are being competed for - where bandwidth and hard drive output are being maxed out and shared between you and everyone else requesting data from the same servers and surrounding infrastructure.

So many factors are at play. But one of the largest I know about is that most modern traffic is not constant - it comes in bursts. This leads to hard-to-predict usage, which makes it hard for ISPs to handle loads and anticipate upcoming demand.

I'm taking a course on Computer Networking and wow, this question gave me some crazy flashbacks to protocols and diagrams from class. The internet is a lot more complex than we think, and a great (and very readable) textbook with many more sources, if you are interested, is Computer Networking: A Top-Down Approach by Kurose and Ross.

For me it was a wayward tree branch that hit my line coming into the house, but also check for water drops on any part of the line, and/or think about replacing wiring - or, better yet, hook the (DSL?) modem directly into the phone line's test jack.

I have some experience with this working for Verizon landline. There are several things that can cause issues with a traditional DSL connection.

  1. Wires that run from the origin office are primarily made of copper. Copper degrades over time, which affects speed.
  2. The copper is insulated with really old and outdated materials. These materials degrade, and rain and humidity affect the wires and slow the speed.
  3. Copper wire is a great conductor, but as we learn in science, the longer the wire, the more resistance. Speed suffers (rough numbers after this list).
  4. Verizon and many other companies are switching to fiber optic cables. Internet speed is increased drastically because the signal is sent via light. Any time the signal degrades, a device amplifies the light and sends it on. The best companies run a fiber cable directly to your house and can guarantee 98% of your plan's speed. Other companies just use fiber until it gets to your neighborhood and then tap into the existing network, and can't guarantee the speed (Comcast). This cuts costs and helps those companies (Comcast) make lots of money even though it is not a true fiber network. Fiber is extremely expensive because the fibers are made of really fine and long strands of glass. Eventually everyone will switch over to complete fiber, but I'm not sure if Wi-Fi broadcasting will beat them to it.
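To put rough numbers on point 3, here's a quick resistance calculation for a copper pair (standard copper resistivity; the wire gauge and loop lengths are illustrative assumptions):

```python
# Rough numbers for point 3: resistance of a copper conductor grows with length.
# R = rho * L / A, using the standard resistivity of copper.
import math

RHO_COPPER = 1.68e-8          # ohm * metre
DIAMETER_MM = 0.5             # roughly 24 AWG telephone wire (illustrative)
area_m2 = math.pi * (DIAMETER_MM / 2 / 1000) ** 2

for length_km in (0.5, 2, 5):
    # Loop resistance: the signal travels out and back, so double the length.
    r = RHO_COPPER * (2 * length_km * 1000) / area_m2
    print(f"{length_km:3.1f} km from the exchange -> ~{r:5.0f} ohms of loop resistance")
```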

Hope this helps

I haven’t seen it mentioned, so I’ll add my little bit:
Just about all of the traffic you are getting is coming in through a CDN like Akamai or Limelight. The data is cached near you to reduce how much has to flow over the long haul. To make this work, they use anycast DNS addresses. When you type in cnn.com, that goes to any number of DNS servers that are close to you (perhaps run by the CDN), and they return an address that is close to you (or, if there's lots of congestion, they might return an address somewhere else that bypasses it). Sometimes the DNS is slow (and sometimes broken). Sometimes the cache servers aren't primed, so you're getting the page from far away. Sometimes there's a router between you that is congested due to an event (for example, people streaming the Super Bowl). It's not always TCP that's at fault.
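If you're curious which addresses you're actually being handed, a quick sketch like this resolves a CDN-fronted hostname a few times and prints the answers (which addresses appear, and whether they change, depends entirely on your resolver, your location, and the CDN's steering):

```python
# Sketch: resolve a CDN-fronted hostname a few times and print the answers.
# Results vary by resolver, location, and the CDN's current decisions.
import socket
import time

HOSTNAME = "www.cnn.com"   # example name from the comment above

for attempt in range(3):
    infos = socket.getaddrinfo(HOSTNAME, 443, proto=socket.IPPROTO_TCP)
    addresses = sorted({info[4][0] for info in infos})
    print(f"attempt {attempt + 1}: {addresses}")
    time.sleep(1)
```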

Work backwards… your internet connection is one of many operating on a node. The node has limited capacity, so fewer people connected = better service, more connected = worse service. People getting home from work will cause the quality to go down.

The node goes back to a central office / head end, and this repeats until you get back to a major data center. All of this leads to variable service.

Look at the internet and what you are trying to get to. Where are those servers, who do they use as an ISP, and do they have enough capacity for the demand at that moment?

So many variables…

Through all of these explanations of routers, switches, QoS, congestion mechanisms, buffers, etc., don't forget that slowness can be caused by good old-fashioned server overload. It's not as likely, but it can happen - if Netflix doesn't add servers fast enough, for example. Or Amazon. Etc.

My classic example of this is when the lottery numbers are drawn. Servers tend to be slow because everyone is hitting them trying to find out if they won. I recall, many years back, trying to hit the Powerball site and not being able to get through, or getting partial web page returns, simply because the site was completely overloaded. So whenever anyone asks me about internet speed, I make a point of including this in the conversation. I emphasize that you could have a 100 Gb connection, and in that case it won't make a bit of difference.

With technologies to dynamically spin up servers at a moment’s notice it’s not nearly the problem it used to be for major services, but don’t discount it for smaller shops and sites.

Your internet provider sucks if your speed is random. You should be getting your advertised speed all the time. If you're not, call your ISP and have them send a tech out. Repeatedly, if needed. If they can't fix it after several visits, consider contacting the FCC - but be aware they might cut you as a customer. Whether that is even legal, I'm not sure.

Clock cycles. Link speed. Link type. Packet degradation. Network congestion. Type of resource. Media. Protocol. Routing. Quotas. Type of network. Distance between nodes. Monitoring tools. Location of monitoring tool.

Take your pick of any or all of the above, add a few dozen other issues related directly to your PC, then an unknowable number of others if you are on an enterprise domain (work/school).

The question you asked is seriously complex with too many variables to account for in such a vaguely worded question.

I'm a software developer. Classically trained in C++, now in C#, blah blah blah. I don't know a damn thing about how data transport works, but I understand a lot of the technicals of computing. Any good resources to learn more about this?

For example - I don't really understand the term "jitter". Buffer, latency, bloating, bursty - I get these terms. But the Wikipedia page for jitter is just filled with more jargon I don't understand.

I have the problem you mention with evening slowdowns. Not only does the internet slow down, but packets are lost, so web pages don't load correctly. Speed test websites don't load correctly either. Ping works just fine.

Is there a good way to graph when my connection goes bad? I could set up a loop with ping, but ping works during the bad periods, so that won't tell me anything.
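One possible approach (a sketch, not a polished tool): instead of bare pings, time a small HTTP fetch on a fixed interval and log the results to a CSV you can graph later - a transfer that actually exercises TCP will usually show the degradation that ICMP ping hides. The URL and interval below are placeholders.

```python
# Sketch: log how long a small HTTP fetch takes, once a minute, to a CSV.
# Failures are logged too, so spikes and gaps in the graph mark the bad periods.
import csv
import time
import urllib.request

URL = "https://example.com/"      # placeholder test URL - pick something small and reliable
INTERVAL_S = 60

with open("connection_log.csv", "a", newline="") as f:
    writer = csv.writer(f)
    while True:
        start = time.time()
        try:
            with urllib.request.urlopen(URL, timeout=10) as resp:
                resp.read()
            writer.writerow([int(start), f"{time.time() - start:.3f}", "ok"])
        except Exception as exc:
            writer.writerow([int(start), "", f"error: {exc}"])
        f.flush()
        time.sleep(INTERVAL_S)
```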

I recently learned (from the LWN article "Making WiFi fast" by Jonathan Corbet, November 8, 2016) that bufferbloat is being actively worked on, e.g. https://www.bufferbloat.net/projects/bloat/news/

Really useful info for sure. Thanks! If you ever have the time definitely come back and post an edit with more information on those last topics you mentioned.

One reason for the many packet losses is intermediate routers in the network (middleboxes). Sometimes they have buffers that are too large, which is a weak spot of the currently used TCP implementations. An engineering team at Google contributed a new version of TCP that should reduce the data-rate fluctuation caused by oversized buffers and lags in packet transmission. Of course that's not the only source of congestion/packet loss, but it is hoped to have a big impact on overall internet stability. The patch should be in Linux kernel 4.9.
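If you're on a Linux box with a 4.9+ kernel, you can check which congestion control your system defaults to and ask for BBR on a single socket like this (a Linux-specific sketch, assuming the bbr module is available; the /proc path and TCP_CONGESTION option don't exist on other platforms):

```python
# Linux-specific sketch: show the default TCP congestion control, then
# request BBR on one socket (needs kernel 4.9+ with the bbr module loaded).
import socket

with open("/proc/sys/net/ipv4/tcp_congestion_control") as f:
    print("system default:", f.read().strip())

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    s.setsockopt(socket.IPPROTO_TCP, socket.TCP_CONGESTION, b"bbr")
    current = s.getsockopt(socket.IPPROTO_TCP, socket.TCP_CONGESTION, 16)
    print("socket now using:", current.decode().strip("\x00"))
except OSError as exc:
    print("could not select bbr:", exc)
finally:
    s.close()
```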

Here's a graph showing how the fluctuation is reduced compared to other TCP implementations.

More details on the BBR algorithm