Assume you have a 10 Mbps connection. Why, when you download something from the internet, is your speed not fixed at 10 Mbps, but instead varying between 2 and 9 Mbps while you're downloading? Why do internet speeds work like this? For comparison, hardware components like the GPU and CPU run at fixed full speeds, so why don't network speeds work the same way?
-
What physical connections are involved in your example? Are you downloading over Wi-Fi or Ethernet? Is your ISP connection via fiber or coax or wireless LTE? – u1686_grawity Feb 01 '21 at 10:01
-
@user1686 Ethernet via FTTH, in a perfect, ideal environment. – Meilsa Feb 01 '21 at 10:03
-
71Because there's an *internet* in the way. – hobbs Feb 01 '21 at 18:03
-
83Why does a traffic jam not move at the posted speed limit? – J... Feb 01 '21 at 21:06
-
44Just to note, CPUs and GPUs **don't** run at a single fixed speed. The OS adjusts the CPU clock based on demand in order to save power (a "4 GHz" core might go as low as 500 MHz), and the CPU itself may throttle based on its temperature in order to avoid overheating. – u1686_grawity Feb 01 '21 at 21:16
-
2@I'mwithMonica: Relatively speaking, yes, but it _is_ around 20 years old by now (my Dell C840 from 2002 had basic two-speed scaling) and definitely old enough to have its spot in general computing knowledge. – u1686_grawity Feb 02 '21 at 07:28
-
2@user1686 Apart from lower-frequency P-states and throttling (neither of which should matter for a busy core, which is the closest equivalent of OP's saturated link), any modern CPU also relies on "boost states", dynamically scaling the frequency many times a second to make best use of the current power and thermal margins. Thus even if you run without OS-based power-saving frequency scaling and with optimal cooling, the frequency printed on the box still doesn't tell you much about how fast the CPU is operating at any given time. – TooTea Feb 02 '21 at 09:51
-
16" hardware components like GPU and CPU run at fixed numbers in full speeds" and do you get a constant frame rate from them? – EarlGrey Feb 02 '21 at 11:09
-
4If you're traveling from one place to another on public roads, you might not be able to drive your vehicle at the maximum speed you're legally allowed. That could be for multiple reasons (traffic, vehicle horsepower, road conditions, etc.). The same thing applies here. – DxTx Feb 02 '21 at 11:30
-
@TooTea: But you can easily display CPU frequency & load (and many other things) with a monitor such as Conky. In any case, CPU speed & load have very little to do with network speed. – jamesqf Feb 03 '21 at 03:42
-
1@I'mwithMonica My first computer 30-ish years ago already had a turbo button. – Mast Feb 03 '21 at 17:43
-
50 years ago the telco companies were sure that the only adequate model for networking was the leased-line model, where each connection reserves resources along the route and those resources cannot be allocated to anything else until the connection finishes. This makes it possible to give guarantees, since resources are reserved, at the cost of poor utilization and the need to overprovision. Then TCP/IP won by proving that chunking connections into packets and multiplexing them works well enough, with the drawback that almost every resource is shared, with rare/poor Quality of Service management. – Patrick Mevzek Feb 19 '21 at 15:24
7 Answers
The connection between you and your local ISP mostly does work at a fixed connection speed. The main problem is that you are competing with other people on the internet for access to resources.
Your Ethernet connection on a local network will run at a fixed 100 Mbps or 1 Gbps, and transfers to or from another machine on your local network will happen at that speed. If the speed drops, it is most likely because one or both machines are trying to do something else at the same time, either seeking for something else on the drive or keeping the CPU busy elsewhere. On a mostly idle machine you will get nearly full speed for bulk transfers. Transfers of many small files, however, run into latency limits: small chunks of data can arrive faster than they can be processed (seek, read, write, etc.) at either side.
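To put rough numbers on that latency limit, here is a back-of-the-envelope sketch in Python. The 1 Gbps link speed and the 2 ms per-file overhead (seek, open, protocol round trip) are invented, illustrative figures, not measurements:

```python
# Rough illustration of why many small files transfer slower than one
# big file on the same link. The link speed and per-file overhead
# below are invented numbers for illustration only.

LINK_BPS = 1_000_000_000        # assumed 1 Gbps local link
PER_FILE_OVERHEAD_S = 0.002     # assumed fixed cost per file

def transfer_time(total_bytes: int, file_count: int) -> float:
    """Time to move total_bytes split across file_count files."""
    wire_time_s = total_bytes * 8 / LINK_BPS
    return wire_time_s + file_count * PER_FILE_OVERHEAD_S

one_gib = 1024 ** 3
print(f"1 big file:  {transfer_time(one_gib, 1):7.1f} s")    # ~8.6 s
print(f"100k files:  {transfer_time(one_gib, 100_000):7.1f} s")  # ~208.6 s
```

Same amount of data, but the per-file overhead dominates once the files are small enough.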
The internet has similar problems, but you are also competing with other users. They all make demands on servers, and they all use the same pipes for different amounts of time to transfer different amounts of data.
Your ISP may well have a different path to a server than your neighbour on a different ISP, and their path across the internet may be faster or more efficient. Your path to somewhere else may be better than theirs, and the paths may be constantly changing. The internet is a live, changing network that detects bottlenecks, works around holes and dropouts, seeks out the current best route, and only guarantees that data will get to its endpoint, not how it will get there.
Speed varies as demand, routes, and the environment vary. You have no control over the data once it has left your router or modem.
Wi-Fi is subject to a lot of environmental noise from baby monitors, headsets, other Wi-Fi networks and so on; its claimed speed might be high, but it can be somewhat unpredictable from one moment to the next.
-
20The Internet just _tries_ to get your data there; there are no guarantees. – chrylis -cautiouslyoptimistic- Feb 01 '21 at 22:13
-
9Also, whatever you're downloading gets split up into a bunch of packets. The sender will fire off packets whenever it can (but it may be sending packets to a bunch of other people at the same time - how many users do you think are Googling at any given second?). The packets get sent to you, but there is no guarantee that they all take the same path, or even that any particular packet gets to you. (But there are timeouts and re-send mechanisms built in, so it should all get to you eventually.) It's one of those things that seems simple, but isn't. – jamesqf Feb 02 '21 at 02:49
-
1Well, technically the sender isn't sending whenever it can, but according to a formula that tries to adapt the sending rate to the maximum capacity between sender and receiver. The sawtooth-like variation you often see happens because the sender increases the rate until packets start dropping, then drops the rate and starts increasing it again. – ojs Feb 02 '21 at 08:27
-
@jamesqf: I think most multi-path systems _try_ to always choose the same path for packets belonging to a given connection, to avoid possible packet reordering (which would cause issues for TCP). Some of them hash only the IP address pair, others include the TCP/UDP port pair as well, but in general it's not random. – u1686_grawity Feb 02 '21 at 10:48
-
3@user1686: Packet reordering is a minor problem for TCP. It has explicit sequence numbering, so clients can put the packets back in the right order. And if a TCP packet does get lost, TCP also allows the receiver to request a resend. That is yet another cause of the speed variation. – MSalters Feb 02 '21 at 10:53
-
@user1686: Try, yes, because that's more efficient. But there's no guarantee, and indeed, the internet was originally designed around the premise that it wouldn't always be possible. – jamesqf Feb 02 '21 at 23:19
Your local connection has a max speed of 10 Mb/s.
But what about the source of the data on the other end of the connection? That can be faster or SLOWER than your connection.
If it is slower there is no way it can deliver the data at 10 Mb/s to you.
(Bear in mind that world-wide many types of Internet connection are asymmetrical. Download is a lot faster than upload for most users.)
In addition to that, there are also bottlenecks in the internet at large, outside your (or your ISP's) control.
So your ISP can offer you a max speed, but your actual speed depends on a lot more factors than that and will in general be lower than your theoretical maximum.
I happen to have a relatively fast DOCSIS cable connection with a 500 Mb/s downlink and a 40 Mb/s uplink. You can already tell from that that although I can in theory download at 500 Mb/s, I can only upload at 40 Mb/s. So if I send something to my neighbor (also on such a connection with the same ISP, so minimal interference from any other stuff) he will only receive it at 40 Mb/s, because I can't deliver it any faster than that.
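In arithmetic terms, the achievable rate is simply the minimum capacity along the path. A minimal Python sketch, using the figures from my connection above (the helper function is just for illustration):

```python
# Minimal sketch: end-to-end throughput is capped by the slowest hop.
# The 500/40 Mb/s figures mirror the DOCSIS example above.

def effective_rate_mbps(sender_up: float, receiver_down: float,
                        *other_bottlenecks: float) -> float:
    """The achievable rate is the minimum capacity along the path."""
    return min(sender_up, receiver_down, *other_bottlenecks)

# Neighbor to neighbor: my 40 Mb/s uplink is the limit, even though
# both of us can download at 500 Mb/s.
print(effective_rate_mbps(40, 500))  # -> 40
```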
In fact, even though I can in theory download at 500 Mb/s, I rarely manage to get more than 300 Mb/s. I have to work hard for that, using multiple computers in parallel, all downloading various things at the same time. That is mainly because the various services on the Internet that provide the downloads have their own upload speed limits.
Some of these limits are hardware/ISP dependent on their end. Others are software controlled on their end, because they don't want a single customer with a fast download hogging all the available upload bandwidth and leaving nothing for other customers. So they cap the upload available to a customer at a reasonable maximum.
-
2I find that the limit of servers is usually the bottleneck, as most downloads don't even approach 100 Mb/s. This is clear if you have ever tried to download an ISO; it is practically impossible without torrenting to get around the provider's limits. A 4 GB ISO maybe takes a few minutes via torrent, whereas it takes hours, possibly even days, if I were to just download via conventional methods. – gsquaredxc Feb 01 '21 at 21:43
-
2@gsquaredxc I prefer torrents (if possible) as well. Not only much better throughput because of the many sources in parallel, but torrents also have error checking and correcting as you go. With http(s) and/or FTP you need to do a checksum after the download, and if it is bad you have to do the whole download again and hope it comes through OK the second time. – Tonny Feb 01 '21 at 23:42
-
2@gsquaredxc: That's probably a matter of your ISO sources. Free Linux download servers will not have a lot of bandwidth, that comes with the price. Steam games will routinely download faster than that, but Steam is a commercial service. – MSalters Feb 02 '21 at 10:57
It depends on several factors. In some cases you will see fairly static speeds, e.g. I had a very stable (if miserable) 8 Mbps download over ADSL at home, and when I copy files at work I typically see nearly flat graphs at ~980 Mbps.
Some connection types have a fixed rate, e.g. an Ethernet connection negotiates 1 Gbps once and sticks with it. However, other connection types – such as those running over radio, or power lines, or other "less than reliable" media – automatically adjust their link rate depending on the environment, e.g. the signal strength, packet loss, and/or the number of corrupted packets.
So if you're using Wi-Fi, the link rate can rapidly drop as people walk around and absorb your signal; and even in stable conditions it still won't stay static, as your device occasionally probes higher rates, decides they're not good, and falls back. (See the "Minstrel" algorithm for a widely used example.) The same applies to LTE and other "wireless ISP" connections.
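For a feel of how such probe-and-fall-back rate control behaves, here is a toy Python sketch. It is only loosely inspired by Minstrel and is not the actual algorithm (real implementations track per-rate success statistics); the rates and the loss model are invented:

```python
import random

# Toy sketch of probe-based rate control: occasionally try a faster
# link rate, fall back when frames start failing. NOT the real
# Minstrel algorithm; rates and loss model are invented.

RATES = [6, 12, 24, 48, 72, 144]  # hypothetical link rates in Mbps

def frame_succeeds(rate: float, signal_quality: float) -> bool:
    """Toy model: higher rates fail more often as signal quality drops."""
    return random.random() < signal_quality * (RATES[0] / rate) ** 0.3

current = 0
for step in range(20):
    quality = 0.9 if step < 10 else 0.5  # someone walks in front of the AP
    if step % 5 == 0 and current < len(RATES) - 1:
        current += 1                      # occasionally probe a higher rate
    if not frame_succeeds(RATES[current], quality) and current > 0:
        current -= 1                      # fall back after a failed frame
    print(f"step {step:2d}: link rate {RATES[current]:3d} Mbps")
```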
Many connections are oversubscribed. For example, in an office, even if you have a 1 Gbps Ethernet port personally, it might go into a switch which then shares just a 1 Gbps uplink for the entire office. So if your neighbour also starts a large download, this will cause your download rate to suddenly halve as the two of you have to share the single gigabit link. Similarly, in FTTH, it could be that you have 50 neighbours all downloading games or watching 4K Netflix over an oversubscribed uplink – each of them getting a proportion of the total available speed. As their usage changes (e.g. video stream stops), the proportion available to everyone else also changes.
The same can occur at any point – it could be that the server is trying to squeeze 200 downloads through its uplink, and it could be that the connection between two ISPs is getting congested during this time of day. So if hundreds of customers are downloading the same thing over the same 10 Gbps connection, they will all see varying speeds as connections come and go and the proportion of the link that each user gets keeps changing.
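In the ideal case, a congested shared link simply divides among whoever is active at the moment. A minimal sketch of that even split (the 1000 Mbps uplink and the user counts are assumed figures; real ISPs use smarter traffic management than a plain even share):

```python
# Sketch of an oversubscribed uplink shared equally among active users.

UPLINK_MBPS = 1000  # assumed shared uplink capacity

def per_user_rate(active_users: int) -> float:
    """Ideal fair share of the uplink when several downloads run at once."""
    return UPLINK_MBPS / max(active_users, 1)

for active in (1, 2, 5, 20, 50):
    print(f"{active:2d} active users -> {per_user_rate(active):6.1f} Mbps each")
```

As users start and stop their downloads, `active_users` keeps changing, and so does everyone's share.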
Downloads over TCP use a congestion control algorithm to make sure the sender doesn't just flood the network with data, but sends it at a rate which the receiver can accept. Most of the commonly-used algorithms will reduce the transmission rate upon seeing packet loss, then slowly ramp it up again. Some servers could be using an outdated or mistuned algorithm which overreacts and reduces the transmission speed much more than it needs to.
(Sometimes the opposite happens and the congestion control algorithm doesn't react correctly, e.g. "BBR[v1] does not back off if packet loss is detected. But in this case the packet loss is caused by congestion. Since BBR[v1] has no means to distinguish congestion related from non-congestion related loss, point (B) is actually crossed, which can lead to massive amounts of packet loss")
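The classic loss-based behaviour described above is additive-increase/multiplicative-decrease (AIMD), which is what produces the familiar sawtooth in transfer graphs. A toy Python loop showing the shape (the link capacity and loss odds are invented):

```python
import random

# Toy AIMD (additive-increase, multiplicative-decrease) loop, the idea
# behind classic loss-based TCP congestion control such as Reno. Real
# stacks are far more involved; all numbers here are invented.

cwnd = 10.0           # congestion window, in packets
LINK_CAPACITY = 100   # packets per RTT the path can actually carry

for rtt in range(40):
    # Loss becomes likely once we push past the link's capacity.
    lost = cwnd > LINK_CAPACITY and random.random() < 0.8
    if lost:
        cwnd = max(cwnd / 2, 1.0)  # multiplicative decrease on loss
    else:
        cwnd += 1.0                # additive increase each RTT
    print(f"RTT {rtt:2d}: cwnd = {cwnd:5.1f} packets")
```

The window ramps up linearly, overshoots, halves on loss, and ramps up again: the sawtooth.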
As the late Senator Ted Stevens said, the Internet is "a series of tubes."
"Ten movies streaming across that, that Internet, and what happens to your own personal Internet? I just the other day got... an Internet was sent by my staff at 10 o'clock in the morning on Friday. I got it yesterday [Tuesday]. Why? Because it got tangled up with all these things going on the Internet commercially. [...] They want to deliver vast amounts of information over the Internet. And again, the Internet is not something that you just dump something on. It's not a big truck. It's a series of tubes. And if you don't understand, those tubes can be filled and if they are filled, when you put your message in, it gets in line and it's going to be delayed by anyone that puts into that tube enormous amounts of material, enormous amounts of material." https://en.wikipedia.org/wiki/Series_of_tubes
While this speech was mocked relentlessly at the time, it's not a bad metaphor.
If we consider the old myth about municipal sewer systems being overwhelmed during Super Bowl halftime shows, the metaphor even helps.
You are competing for a scarce resource. And if everyone else is competing at the same time, latencies build up. If more data is being transferred than the tube can handle, it has to back up.
In a very simple example, we have a router and a cable. If you are the only person accessing that network, the router will route your packets as fast as possible, utilizing the entire bandwidth of the cable. But when your roommate logs on to do his "research", you're now sharing that cable: the router will only give you the cable 50% of the time, allocating the rest to your roommate, and your apparent speed is cut in half.
The links that make up the internet generally run at fixed data rates. But all of the links on the path between you and whatever machine you're connecting to (except possibly the first and last ones) are shared between hundreds or thousands or millions of other users, who all have their own things they want to do on the internet. The second-to-second behavior of all of those other users is unpredictable, and sometimes packets arrive faster than a given link can handle, and have to be queued or dropped, which affects the speed of the end-to-end connection.
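A toy model makes the queueing effect concrete; all the numbers here are invented:

```python
# Toy model of a shared link: when packets from many users arrive faster
# than the link can forward them, a queue builds (adding delay) and can
# overflow (dropping packets).

LINK_RATE = 100     # packets the link can forward per time slot
QUEUE_LIMIT = 200   # router buffer size, in packets

queue = 0
arrivals = [80, 90, 150, 250, 120, 60, 40]  # total demand per time slot

for slot, arriving in enumerate(arrivals):
    queue += arriving
    dropped = max(queue - QUEUE_LIMIT, 0)  # buffer overflow: tail drop
    queue = min(queue, QUEUE_LIMIT)
    sent = min(queue, LINK_RATE)
    queue -= sent
    print(f"slot {slot}: sent {sent}, queued {queue}, dropped {dropped}")
```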
It's like traffic — the width of the road doesn't change, and the max speed of your car doesn't change, but how long it takes to go somewhere still varies depending on how many other people are trying to use the roads along your way.
One issue is that latency and reliability interact with bulk transfer speed.
While analogies to car traffic make a certain degree of intuitive sense, car traffic is different: cars are not packets traveling at close to the speed of light between routers, and packets have to be acknowledged by packets going in the opposite direction before more can be sent.
Roads also don't simply drop cars when they become congested. E.g. if you can't merge onto a certain highway, you are not pulled off the road and sent to the scrap yard.
One reason a transfer's speed might drop is that some packets have been lost, requiring retransmission. TCP's sliding window can smooth over some of these effects: if a packet is only rarely lost, and the sliding window keeps making forward progress while a retransmission back-fills it, the loss may not be noticeable. But if losses are more frequent, the retransmissions can visibly stall forward progress.
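How strongly loss caps throughput can be estimated with the well-known Mathis et al. approximation, rate ≈ MSS / (RTT × sqrt(p)). A quick Python sketch, where the MSS and RTT are merely typical-looking assumptions:

```python
from math import sqrt

# Back-of-the-envelope estimate of how packet loss caps steady-state
# TCP throughput, via the classic Mathis approximation:
#   rate ≈ MSS / (RTT * sqrt(p))

MSS_BYTES = 1460   # common Ethernet TCP payload size
RTT_S = 0.050      # assumed 50 ms round-trip time

def mathis_estimate_mbps(loss_rate: float) -> float:
    return (MSS_BYTES * 8 / (RTT_S * sqrt(loss_rate))) / 1e6

for p in (0.0001, 0.001, 0.01):
    print(f"loss rate {p:.4f} -> roughly {mathis_estimate_mbps(p):5.1f} Mbps")
```

Even 1% loss at a 50 ms RTT caps a single connection at roughly 2 Mbps, no matter how fast the link is.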
The TCP protocol is not designed for optimal transfer speed. The design is hedged in order to make it "nice" toward the network, and reduce congestion. The algorithms in TCP sacrifice performance for the benefit of the network.
If there is congestion, causing packets to be lost and delayed, then simply trying harder and sending more packets could make things better for a single connection; but if every connection starts doing that, the congestion will spiral out of control.
One congestion control mechanism is window scaling. TCP starts with a small window (which is bad for throughput on fast networks with high latency); this is the "slow start". Under the right conditions, the TCP window scales up, growing larger so that the sender can send more packets and more bytes without receiving an acknowledgement for previously sent material, and the data transfer gets faster. Network conditions can curtail window scaling: basically, the TCP protocol has to see solid performance, i.e. what it's sending is being received and acknowledged. If there are hiccups in the network, the window can scale back down again, and the throughput will drop and stay slower for some time until conditions improve.
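A sketch of that ramp-up in Python; the slow-start threshold is an illustrative value:

```python
# Sketch of slow start followed by congestion avoidance: the window
# doubles every RTT until it reaches the slow-start threshold, then
# grows linearly. Threshold chosen purely for illustration.

cwnd = 10        # initial window in segments (RFC 6928 suggests 10)
SSTHRESH = 160   # assumed slow-start threshold

for rtt in range(10):
    print(f"RTT {rtt}: cwnd = {cwnd} segments")
    if cwnd < SSTHRESH:
        cwnd *= 2    # slow start: exponential growth
    else:
        cwnd += 1    # congestion avoidance: additive growth
```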
Using the Internet is kind of like playing a game on your PC.
When the game, or graphical scene, is relatively calm, you will get high FPS. When the game starts getting busy with activity, or complicated graphics and lighting effects, your FPS will go down. Depending on how intense things get, and how inexpensive your equipment is, your FPS might drop so significantly that you get really bad lag or, at worst, drop out of the game altogether.
It's a very similar situation with all the data that is being processed and passed around over the Internet. Nothing about it is really fixed, only rate-capped for your service level. It's all variable, depending on what's happening across the networks at any given moment.