The slow link from Southeast Asia to the US
From my testing, this problem affects Malaysia and Singapore as a whole. If your CDN edge is in Malaysia / Singapore but your origin server is in the US, transfers will be very slow and will almost certainly time out.
This investigation started because I was having a lot of problems working with Fedora, a Linux distribution.
A little background:
There have been a lot of fights between Tier 1 and Tier 2 providers in the EU and Asia lately, all of them involving Cogent. At the beginning of the year, Cogent de-peered NTT in the EU, causing traffic between the two networks to be exchanged in the US and increasing latency even when both endpoints are in the EU. For example, two different ASes in Amsterdam may have to exchange traffic in the US even though they are physically in the same city.
Last month, Cogent de-peered TATA in Asia, cutting off many single-homed customers from each other.
Submarine cable breaks on the routes towards the EU and Japan also add to the problem.
Example:
The website is fronted by Cloudflare or AWS CloudFront, and pings are fine. But once you actually start transferring bulk data, the speed quickly drops to less than 10 KB/s, and depending on how much data is being transferred, you will eventually hit a timeout. This happens regardless of whether the CDN edge is in KL, Johor or Singapore.
In Fedora's case, the mirrors are fronted by AWS CloudFront, but the origin is physically in the US.
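You can check this for yourself: latency to the edge looks healthy, but bulk throughput collapses. A rough way to measure it (the hostname and file path below are just placeholders; substitute any large file served through the affected CDN):
CODE
# Ping the CDN edge - this usually looks fine.
ping -c 5 cdn.example.com

# Then time a bulk download. speed_download is reported in bytes/s;
# --max-time 60 stops the test instead of waiting for the timeout.
curl -o /dev/null --max-time 60 -w 'speed: %{speed_download} bytes/s\n' \
    https://cdn.example.com/path/to/large-file.iso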
Observation:
Anecdotally, the link is mostly fine between 4am and 11am Malaysia time. Obviously, I don't have time to wait for the stars to align.
My solution:
Since I have already built my own anti-censorship "VPN", I used it to solve the problem. In AWS, all inter-AZ and inter-region traffic stays on the AWS backbone, which is what is known as "cold potato routing".
So I spun up an EC2 instance in Oregon and put CloudFront in front of it. I then connect to the CloudFront edge in KL, which cold-potato routes me over the AWS backbone to my EC2 instance in Oregon.
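For reference, a minimal sketch of that setup with the AWS CLI. The AMI ID and key name are placeholders, and this only covers the EC2 + CloudFront part, not the "VPN" software running on the instance:
CODE
# Launch a small instance in Oregon (us-west-2); ami-xxxxxxxx and my-key are placeholders.
aws ec2 run-instances --region us-west-2 \
    --image-id ami-xxxxxxxx --instance-type t3.micro --key-name my-key

# Put CloudFront in front of it. Clients hit the nearest edge (KL in my case)
# and the edge-to-origin leg rides the AWS backbone instead of the public internet.
aws cloudfront create-distribution \
    --origin-domain-name ec2-18-246-47-189.us-west-2.compute.amazonaws.com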
The end result:
Hot potato routing Oregon -> TM
CODE
[ec2-user@i-063f1764f181fb8c3 ~]$ tracepath 115.134.191.254
1?: [LOCALHOST] pmtu 9001
1: ip-172-23-48-1.us-west-2.compute.internal 0.083ms pmtu 1500
1: no reply
2: 240.0.8.65 0.290ms
3: 242.0.65.69 4.915ms asymm 6
4: 240.1.228.15 9.178ms asymm 5
5: 242.4.195.71 6.556ms asymm 18
6: 150.222.214.74 6.814ms asymm 19
7: as6939.10gigabitethernet9-9.core1.sea1.he.net 9.802ms asymm 24
8: no reply
9: as6939.100gigabitethernet2-3.core1.sjc2.he.net 23.471ms
10: no reply
11: telekom-malaysia-inc.e0-2.core3.sjc2.he.net 248.402ms asymm 17
12: 115.134.191.254 235.339ms reached
Resume: pmtu 1500 hops 12 back 17
Hot potato routing TM -> Oregon
CODE
tracepath ec2-18-246-47-189.us-west-2.compute.amazonaws.com
1?: [LOCALHOST] pmtu 1500
1: _gateway 4.103ms
1: _gateway 6.127ms
2: _gateway 2.176ms pmtu 1480
2: 115.134.191.254 11.807ms
3: 10.55.49.97 41.665ms asymm 6
4: 10.55.100.228 9.979ms
5: 10.55.209.61 11.560ms
6: ix-hge-0-0-0-22.ecore1.esin4-singapore.as6453.net 63.331ms asymm 16
7: if-bundle-33-2.qcore1.esin4-singapore.as6453.net 276.803ms asymm 16
8: if-bundle-19-2.qcore2.svw-singapore.as6453.net 195.879ms asymm 13
9: no reply
10: no reply
11: if-et-24-2.hcore2.kv8-chiba.as6453.net 121.413ms asymm 12
12: no reply
<----- snip ----->
30: no reply
Too many hops: pmtu 1480
Resume: pmtu 1480
Cold potato routing EC2 Oregon -> CloudFront KL
CODE
[ec2-user@i-063f1764f181fb8c3 ~]$ tracepath 18.68.55.119
1?: [LOCALHOST] pmtu 9001
1: ip-172-23-48-1.us-west-2.compute.internal 0.070ms pmtu 1500
1: no reply
2: 240.0.8.65 0.321ms
3: 242.0.64.71 1.648ms asymm 6
4: 100.100.22.74 0.516ms asymm 6
5: 150.222.246.46 166.174ms asymm 22
6: 240.5.48.12 165.231ms asymm 22
7: 240.5.48.23 164.835ms asymm 22
8: 240.65.60.195 164.776ms asymm 22
9: no reply
<----- snip ----->
30: no reply
Too many hops: pmtu 1500
Resume: pmtu 1500
The ping! (Cold potato routing)
CODE
[ec2-user@i-063f1764f181fb8c3 ~]$ ping -c 10 18.68.55.119
PING 18.68.55.119 (18.68.55.119) 56(84) bytes of data.
64 bytes from 18.68.55.119: icmp_seq=1 ttl=234 time=165 ms
64 bytes from 18.68.55.119: icmp_seq=2 ttl=234 time=165 ms
64 bytes from 18.68.55.119: icmp_seq=3 ttl=234 time=165 ms
64 bytes from 18.68.55.119: icmp_seq=4 ttl=234 time=165 ms
64 bytes from 18.68.55.119: icmp_seq=5 ttl=234 time=165 ms
64 bytes from 18.68.55.119: icmp_seq=6 ttl=234 time=165 ms
64 bytes from 18.68.55.119: icmp_seq=7 ttl=234 time=165 ms
64 bytes from 18.68.55.119: icmp_seq=8 ttl=234 time=165 ms
64 bytes from 18.68.55.119: icmp_seq=9 ttl=234 time=165 ms
64 bytes from 18.68.55.119: icmp_seq=10 ttl=234 time=165 ms
--- 18.68.55.119 ping statistics ---
10 packets transmitted, 10 received, 0% packet loss, time 9010ms
rtt min/avg/max/mdev = 165.044/165.088/165.146/0.026 ms
The ping! (Hot potato routing)
CODE
[ec2-user@i-063f1764f181fb8c3 ~]$ ping -c 10 115.134.191.254
PING 115.134.191.254 (115.134.191.254) 56(84) bytes of data.
64 bytes from 115.134.191.254: icmp_seq=1 ttl=239 time=249 ms
64 bytes from 115.134.191.254: icmp_seq=4 ttl=239 time=263 ms
64 bytes from 115.134.191.254: icmp_seq=5 ttl=239 time=253 ms
64 bytes from 115.134.191.254: icmp_seq=6 ttl=239 time=229 ms
64 bytes from 115.134.191.254: icmp_seq=7 ttl=239 time=246 ms
64 bytes from 115.134.191.254: icmp_seq=8 ttl=239 time=262 ms
64 bytes from 115.134.191.254: icmp_seq=10 ttl=239 time=229 ms
--- 115.134.191.254 ping statistics ---
10 packets transmitted, 7 received, 30% packet loss, time 9159ms
rtt min/avg/max/mdev = 228.644/247.148/263.060/12.963 ms
Quantifiable improvement:
Packet loss reduced from 30% to 0%. Infinite improvement!
Average latency reduced by 82 ms (247 ms -> 165 ms), a 33% improvement.
Jitter (mdev) reduced by 99.8% (12.963 ms -> 0.026 ms). Gamers rejoice!