Here is a test I did at 10:30pm and 12:30am, between my home and Oracle Cloud Singapore, using a non-burstable instance (i.e. expensive, but it gives me full CPU and 2 Gbps of network performance).
Baseline at 10:30pm
1.9 Gbps locally in Singapore
https://www.speedtest.net/result/c/d30796bb...6a-6c218b1c6532
2 Gbps to Telekom Malaysia
https://www.speedtest.net/result/c/470331b9...6c-748b4a243310
Baseline at 12:30am
1.9 Gbps locally in Singapore
https://www.speedtest.net/result/c/9105b19a...40-c09e0e475ceb
1.9 Gbps to Telekom Malaysia
https://www.speedtest.net/result/c/8030bef7...9d-1473cff9b7d9
My own speedtest at home maxes out at my subscribed 800 Mbps / 200 Mbps every single time (on both speedtest.net and ipv6.speedtest.net). I am sure a lot of people have the same experience.
IPv4 traceroute is identical at both times.
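For reference, every traceroute below (IPv4 and IPv6, both directions) was captured with plain Linux traceroute; a minimal sketch, with <domain censored> standing in for the real hostnames and -4/-6 just forcing the address family:
CODE
# on the Oracle instance, towards home
traceroute -4 <domain censored>
traceroute -6 <domain censored>
# at home, towards the Oracle instance
traceroute -4 <domain censored>
traceroute -6 <domain censored>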
Oracle Cloud to my home
CODE
traceroute to <domain censored> (115.134.181.60), 30 hops max, 60 byte packets
1 140.91.232.46 (140.91.232.46) 0.162 ms 140.91.232.50 (140.91.232.50) 0.136 ms 140.91.232.7 (140.91.232.7) 0.198 ms
2 31898.sgw.equinix.com (27.111.229.44) 1.088 ms 0.792 ms 1.068 ms
3 6939.sgw.equinix.com (27.111.228.81) 25.688 ms * *
4 telekom-malaysia-inc.e0-71.core2.sin1.he.net (74.82.46.50) 2.270 ms 2.247 ms 2.265 ms
5 * * *
6 115.134.181.60 (115.134.181.60) 11.123 ms 14.259 ms 14.256 ms
My home to Oracle Cloud
CODE
traceroute to <domain censored> (168.138.188.163), 30 hops max, 60 byte packets
1 _gateway (192.168.88.1) 0.255 ms 0.232 ms 0.506 ms
2 115.134.191.254 (115.134.191.254) 7.592 ms 7.578 ms 7.564 ms
3 10.55.49.99 (10.55.49.99) 12.407 ms 12.424 ms 12.378 ms
4 10.55.107.93 (10.55.107.93) 9.530 ms 10.55.39.197 (10.55.39.197) 8.166 ms 10.55.107.95 (10.55.107.95) 9.501 ms
5 10.55.100.54 (10.55.100.54) 11.966 ms 11.968 ms 11.908 ms
6 * * *
7 140.91.232.23 (140.91.232.23) 14.586 ms 140.91.232.19 (140.91.232.19) 13.581 ms 140.91.232.31 (140.91.232.31) 14.527 ms
8 * 168.138.188.163 (168.138.188.163) 17.556 ms 17.533 ms
IPv6 traceroute is also identical at both times.
Oracle Cloud to my home
CODE
traceroute to <domain censored> (2001:e68:5427:3c12:164:5ba5:d348:18dd), 30 hops max, 80 byte packets
1 2603:c000:1100::8c5b:e82e (2603:c000:1100::8c5b:e82e) 0.172 ms 2603:c000:1100::8c5b:e80c (2603:c000:1100::8c5b:e80c) 0.142 ms 2603:c000:1100::8c5b:e82c (2603:c000:1100::8c5b:e82c) 0.122 ms
2 31898.sgw.equinix.com (2001:de8:4::3:1898:1) 0.880 ms 3.299 ms 3.257 ms
3 2001:de8:4::4788:3 (2001:de8:4::4788:3) 2.787 ms 2.799 ms 2.749 ms
4 2001:e68:5427:3c12::1 (2001:e68:5427:3c12::1) 11.058 ms 11.036 ms 10.976 ms
5 2001:e68:5427:3c12:164:5ba5:d348:18dd (2001:e68:5427:3c12:164:5ba5:d348:18dd) 10.866 ms 19.580 ms 19.567 ms
My home to Oracle Cloud
CODE
traceroute to <domain censored> (2603:c024:4512:ef00:3a30:bdcb:ccd9:3e7a), 30 hops max, 80 byte packets
1 2001:e68:5427:3c12::1 (2001:e68:5427:3c12::1) 0.220 ms 0.334 ms 0.381 ms
2 2001:e68:402c:8001::6c (2001:e68:402c:8001::6c) 7.030 ms 7.013 ms 7.673 ms
3 2001:e68::b:4011 (2001:e68::b:4011) 12.609 ms 12.592 ms 12.577 ms
4 2001:c10:80:2::615 (2001:c10:80:2::615) 15.260 ms 15.244 ms 14.782 ms
5 2001:c10:80:2::6be (2001:c10:80:2::6be) 25.246 ms 14.446 ms 14.448 ms
6 2001:c10:80:2::2bd (2001:c10:80:2::2bd) 14.689 ms 2001:c10:80:2::60d (2001:c10:80:2::60d) 10.629 ms 2001:c10:80:2::2bd (2001:c10:80:2::2bd) 12.199 ms
7 2603:c000:1100::8c5b:e812 (2603:c000:1100::8c5b:e812) 14.492 ms 2603:c000:1100::8c5b:e816 (2603:c000:1100::8c5b:e816) 10.595 ms 2603:c000:1100::8c5b:e815 (2603:c000:1100::8c5b:e815) 14.187 ms
8 * 2603:c024:4512:ef00:3a30:bdcb:ccd9:3e7a (2603:c024:4512:ef00:3a30:bdcb:ccd9:3e7a) 14.043 ms 14.020 ms
As can be seen above, for the IPv6 connection Oracle exchanges traffic directly with TM inside Equinix.
For IPv4, traffic is handed off to Hurricane Electric inside Equinix, which then transits it to TM.
Hint: you can tell this from the XXXXX.sgw.equinix.com hostnames, where XXXXX is the AS number of the peer on that exchange port. This naming convention is specific to Equinix and does not necessarily apply to other IXPs.
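If you want to check those ASNs yourself, reverse DNS on the hop addresses plus a whois lookup on the AS number is enough; a rough sketch (output format varies by whois server, and the 4788 embedded in 2001:de8:4::4788:3 presumably follows the same convention on the IPv6 exchange subnet):
CODE
# reverse DNS of the hops from the traces above
dig +short -x 27.111.228.81    # 6939.sgw.equinix.com -> AS6939
dig +short -x 74.82.46.50      # telekom-malaysia-inc.e0-71.core2.sin1.he.net, a Hurricane Electric router
# who owns those AS numbers
whois AS6939                   # Hurricane Electric
whois AS4788                   # TM Net
whois AS31898                  # Oracle Cloud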
Actual performance at 10:30pm
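All of the results below are iperf3 running in reverse mode, i.e. the Oracle instance sends and my home receives. Roughly, the setup looks like this (a sketch; port 65535 is simply what I used):
CODE
# on the Oracle instance
iperf3 -s -p 65535
# at home: -R = reverse mode, -4/-6 forces the address family
iperf3 -c <domain censored> -p 65535 -R -4
iperf3 -c <domain censored> -p 65535 -R -6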
IPv4
CODE
Reverse mode, remote host <domain censored> is sending
[ 5] local 192.168.88.13 port 36194 connected to 168.138.188.163 port 65535
[ ID] Interval Transfer Bitrate
[ 5] 0.00-1.00 sec 4.35 MBytes 36.5 Mbits/sec
[ 5] 1.00-2.00 sec 13.4 MBytes 112 Mbits/sec
[ 5] 2.00-3.00 sec 17.5 MBytes 147 Mbits/sec
[ 5] 3.00-4.00 sec 12.6 MBytes 106 Mbits/sec
[ 5] 4.00-5.00 sec 12.9 MBytes 108 Mbits/sec
[ 5] 5.00-6.00 sec 13.9 MBytes 116 Mbits/sec
[ 5] 6.00-7.00 sec 17.8 MBytes 149 Mbits/sec
[ 5] 7.00-8.00 sec 13.5 MBytes 114 Mbits/sec
[ 5] 8.00-9.00 sec 12.4 MBytes 104 Mbits/sec
[ 5] 9.00-10.00 sec 17.8 MBytes 150 Mbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bitrate Retr
[ 5] 0.00-10.08 sec 139 MBytes 116 Mbits/sec 10334 sender
[ 5] 0.00-10.00 sec 136 MBytes 114 Mbits/sec receiver
IPv6
CODE
Reverse mode, remote host <domain censored> is sending
[ 5] local 2001:e68:5427:3c12:164:5ba5:d348:18dd port 35422 connected to 2603:c024:4512:ef00:3a30:bdcb:ccd9:3e7a port 65535
[ ID] Interval Transfer Bitrate
[ 5] 0.00-1.00 sec 29.1 MBytes 244 Mbits/sec
[ 5] 1.00-2.00 sec 35.4 MBytes 297 Mbits/sec
[ 5] 2.00-3.00 sec 33.2 MBytes 279 Mbits/sec
[ 5] 3.00-4.00 sec 34.9 MBytes 293 Mbits/sec
[ 5] 4.00-5.00 sec 33.5 MBytes 281 Mbits/sec
[ 5] 5.00-6.00 sec 34.3 MBytes 288 Mbits/sec
[ 5] 6.00-7.00 sec 34.1 MBytes 286 Mbits/sec
[ 5] 7.00-8.00 sec 34.8 MBytes 292 Mbits/sec
[ 5] 8.00-9.00 sec 33.5 MBytes 281 Mbits/sec
[ 5] 9.00-10.00 sec 34.4 MBytes 289 Mbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bitrate Retr
[ 5] 0.00-10.02 sec 340 MBytes 285 Mbits/sec 13239 sender
[ 5] 0.00-10.00 sec 337 MBytes 283 Mbits/sec receiver
From here you can see that IPv6 is actually about 2.5x faster (283 vs 114 Mbps) when Oracle and TM exchange traffic directly inside Equinix.
Actual performance at 12:30am
IPv4
CODE
Reverse mode, remote host <domain censored> is sending
[ 5] local 192.168.88.13 port 36554 connected to 168.138.188.163 port 65535
[ ID] Interval Transfer Bitrate
[ 5] 0.00-1.00 sec 83.4 MBytes 699 Mbits/sec
[ 5] 1.00-2.00 sec 95.3 MBytes 799 Mbits/sec
[ 5] 2.00-3.00 sec 92.7 MBytes 778 Mbits/sec
[ 5] 3.00-4.00 sec 93.3 MBytes 783 Mbits/sec
[ 5] 4.00-5.00 sec 91.6 MBytes 768 Mbits/sec
[ 5] 5.00-6.00 sec 89.7 MBytes 752 Mbits/sec
[ 5] 6.00-7.00 sec 93.2 MBytes 782 Mbits/sec
[ 5] 7.00-8.00 sec 92.4 MBytes 775 Mbits/sec
[ 5] 8.00-9.00 sec 90.1 MBytes 756 Mbits/sec
[ 5] 9.00-10.00 sec 93.3 MBytes 783 Mbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bitrate Retr
[ 5] 0.00-10.02 sec 918 MBytes 769 Mbits/sec 46250 sender
[ 5] 0.00-10.00 sec 915 MBytes 767 Mbits/sec receiver
IPv6
CODE
Reverse mode, remote host <domain censored> is sending
[ 5] local 2001:e68:5427:3c12:164:5ba5:d348:18dd port 48856 connected to 2603:c024:4512:ef00:3a30:bdcb:ccd9:3e7a port 65535
[ ID] Interval Transfer Bitrate
[ 5] 0.00-1.00 sec 31.3 MBytes 263 Mbits/sec
[ 5] 1.00-2.00 sec 34.9 MBytes 292 Mbits/sec
[ 5] 2.00-3.00 sec 34.8 MBytes 292 Mbits/sec
[ 5] 3.00-4.00 sec 35.4 MBytes 297 Mbits/sec
[ 5] 4.00-5.00 sec 33.9 MBytes 284 Mbits/sec
[ 5] 5.00-6.00 sec 33.7 MBytes 283 Mbits/sec
[ 5] 6.00-7.00 sec 35.1 MBytes 295 Mbits/sec
[ 5] 7.00-8.00 sec 34.3 MBytes 288 Mbits/sec
[ 5] 8.00-9.00 sec 34.1 MBytes 286 Mbits/sec
[ 5] 9.00-10.00 sec 35.5 MBytes 298 Mbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bitrate Retr
[ 5] 0.00-10.01 sec 346 MBytes 290 Mbits/sec 15149 sender
[ 5] 0.00-10.00 sec 343 MBytes 288 Mbits/sec receiver
After midnight, the customer edge is no longer congested and IPv4 gets almost full speed, making it roughly 2.6x faster than IPv6 (767 vs 288 Mbps).
Result interpretation
My Oracle Cloud instance gets full speed regardless of the time, to both the TM and Singapore speedtest servers. However, I don't know how to make speedtest-cli run its test over IPv6, so those are all IPv4 results.
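For the record, speedtest-cli does have a --source option to bind to a specific local address, so binding it to the instance's IPv6 address might force an IPv6 test; I have not verified that this works:
CODE
# untested: try to force IPv6 by binding to the instance's IPv6 address
speedtest-cli --source 2603:c024:4512:ef00:3a30:bdcb:ccd9:3e7a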
At 10:30pm, IPv4 performance is 14% of the subscribed speed (114 of 800 Mbps) and IPv6 performance is 35% (283 of 800 Mbps).
At 12:30am, IPv4 performance is 96% of the subscribed speed (767 of 800 Mbps) and IPv6 performance is 36% (288 of 800 Mbps).
Since the speedtest between Oracle and TM ran at full speed at both times, the only explanation for the slowness is congestion closer to the customer edge (BNG or OLT). As for why I still get a perfect speedtest score at home, the only explanation I can think of is TM using QoS to prioritize speedtest traffic.
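If anyone wants to dig further, running mtr towards the Oracle instance during the congested window should show at which hop the loss or latency jump starts; a sketch (I have not captured this here):
CODE
# ~100 probes, report mode, wide output
mtr -rw -c 100 <domain censored>      # IPv4
mtr -rw -c 100 -6 <domain censored>   # IPv6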
Unknown
The question then becomes: is the TM-Equinix link permanently congested?
It seems odd that TM does not utilize the TM-Equinix link for egress, as both IPv4 and IPv6 transit via SingTel instead.
On the ingress side, the only explanation for why IPv4 transits via Hurricane Electric while IPv6 peers directly in Equinix is that TM actually performs some kind of traffic engineering.
The "IPv4 is better than IPv6" debate
As can be seen here, depending on the time of day, IPv4 can be better than IPv6 and vice versa, between the exact same endpoints.