The last entry in the list *is* technically slower, because it's not actually a fancy object-storage frontend: it's served by a plain old nginx web server on real hardware, and it has teeny tiny throughput because...
Code:
[ 15.660441] e1000e 0000:00:19.0 eno1: NIC Link is Up 100 Mbps Full Duplex, Flow Control: None
Which makes it ten times slower than my home network ^^
The others are actually OpenStack frontends, and should offer better throughput.
(e.g., you can check in a browser; they all have file listing enabled: if it's pretty and naturally sorted, it's nginx, if it's dog-ugly, it's OpenStack).
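If you'd rather not eyeball it, a quick Python sketch along these lines does the same check; the mirror URL is a placeholder, and the Server header / page title test is only a heuristic:
Code:
# Rough guess at what's serving a mirror, based on the Server header
# and the listing page title. Placeholder URL, heuristic only.
from urllib.request import urlopen

MIRROR = "https://example.org/some/mirror/path/"  # placeholder, not a real mirror

with urlopen(MIRROR) as resp:
    server = resp.headers.get("Server", "")
    listing = resp.read(2048).decode("utf-8", errors="replace")

if "nginx" in server.lower() or "<title>Index of" in listing:
    # nginx's autoindex pages are titled "Index of /...", and the box
    # advertises itself in the Server header.
    print("plain nginx box")
else:
    print("probably an OpenStack frontend, Server header:", server)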
(There's a bit of a tradeoff: CloudFlare/OpenStack is *terrible* at range requests, but offers us essentially unlimited bandwidth and fairly decent PoP coverage. nginx is *awesome* at range requests, but my own server has very limited bandwidth).
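You can see the range-request behaviour from the outside too: ask for the first kilobyte of a file and check whether you get a 206 Partial Content back. Again just a sketch with a placeholder URL:
Code:
# Ask for the first 1 KiB of a file and see how the server reacts.
from urllib.request import Request, urlopen

FILE_URL = "https://example.org/some/mirror/path/some-release.AppImage"  # placeholder

req = Request(FILE_URL, headers={"Range": "bytes=0-1023"})
with urlopen(req) as resp:
    if resp.status == 206:
        print("Range honoured:", resp.headers.get("Content-Range"))
    else:
        # A 200 here means the Range header was ignored and the whole
        # file is coming down, which is what hurts partial downloads.
        print("Range ignored, got status", resp.status)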
Granted, since the move to zsync2, that's less of an issue than before. That was in fact one of the main goals of the whole thing: being able to get decent speeds out of the "proper" master/mirrors.
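(For reference, the zsync2 side of it is just pointing the tool at a .zsync control file; it then fetches only the changed blocks via range requests. Placeholder URL below, and it assumes zsync2 is on your PATH):
Code:
# Minimal sketch: hand zsync2 a .zsync control file URL and let it
# range-request only the blocks that differ from the local seed file.
import subprocess

ZSYNC_URL = "https://example.org/some/mirror/path/some-release.AppImage.zsync"  # placeholder

subprocess.run(["zsync2", ZSYNC_URL], check=True)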