Running your own NAT64 gateway

Posted on February 24, 2025

In my last post we explored how to reduce IPv4 costs in the cloud by using 464XLAT and a public NAT64 gateway. If you’re going to rely on this for important traffic, you may prefer to run your own gateway. This post is part 2 in the 464XLAT series and will show you how. If you haven’t read part 1, I recommend checking that out first as I’ll be skipping much of the background and terminology.

Where to run your gateway

You will need a server with a routed IPv6 block that is a /96 or larger. A /64 is common, but it’s important that it be routed and not on-link. When a block is routed, the server receives traffic for all IPs in the block without having to do anything special. In contrast, when a block is on-link, the server needs to answer a neighbor solicitation request for each new IP that receives any traffic. That adds latency and presents a scalability problem for the provider, whose router needs to cache a large number of neighbor advertisement responses. A routed block is therefore preferred.
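One quick way to check what you actually got: from another machine, ping an unused address in the block while running tcpdump on the server (ens4 is my interface name; substitute your own). If a neighbor solicitation shows up for that address, the block is on-link rather than routed.

tcpdump -ni ens4 'icmp6 and ip6[40] == 135'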

It can be unexpectedly difficult to find a provider offering routed IPv6 blocks. Many advertise a /64 without further qualifiers, and if you sign up, you’ll find that it’s on-link. For this demonstration I’ll use Google Compute Engine, which routes a /96 to each VM. As of February 2025, you can do this on the free tier and pay only for network bandwidth. Some other providers known to work are Linode (but you need to request a /64 from their customer service), Vultr (but only via their reserved IPs feature; the default /64 included with the VM is on-link), or BuyVM (this one’s easy; every VM includes a routed /48).

Setup

Prepare the environment

To start off we’ll need a dual-stack VM. I’ve created one and verified that it has connectivity on both IPv4 and IPv6.

root@decagon:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host noprefixroute
       valid_lft forever preferred_lft forever
2: ens4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 42:01:0a:00:00:02 brd ff:ff:ff:ff:ff:ff
    altname enp0s4
    inet 10.0.0.2/32 metric 100 scope global dynamic ens4
       valid_lft 2750sec preferred_lft 2750sec
    inet6 2600:1900:4001:de6::/128 scope global dynamic noprefixroute
       valid_lft 2752sec preferred_lft 2752sec
    inet6 fe80::4001:aff:fe00:2/64 scope link
       valid_lft forever preferred_lft forever

root@decagon:~# curl http://ipv6.myip.wtf/text
2600:1900:4001:de6::

root@decagon:~# curl http://ipv4.myip.wtf/text
34.70.245.156

Next up, install the dependencies.

apt update && apt -y install jool-tools jool-dkms linux-headers-cloud-amd64
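The jool-dkms package builds the kernel module from source, so the headers must match the running kernel. linux-headers-cloud-amd64 is right for Debian’s cloud kernel; if you’re on a different kernel, install the matching headers instead:

apt -y install "linux-headers-$(uname -r)"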

And then load the kernel module.

modprobe jool
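You can confirm the module loaded:

lsmod | grep jool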

Setting up Jool

Now we’re ready to configure Jool, the same package we used in part 1. We’ll need to decide on an appropriate NAT64 prefix and which IPv4 address to NAT the traffic to.

Since our VM has the IP 2600:1900:4001:de6:: and GCE routes a /96, the NAT64 prefix is simply 2600:1900:4001:de6::/96. Note that the NAT64 prefix overlaps with the VM’s own IP. This will create a minor complication later but can be worked around. If your provider assigns a larger block, such as a /64, I recommend choosing a non-overlapping /96 for simplicity.
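To make the mapping concrete: with an RFC 6052 /96 prefix, the IPv4 address simply occupies the last 32 bits of the IPv6 address. For example, 8.8.8.8 is 08 08 08 08 in hex, so through this gateway it becomes 2600:1900:4001:de6::808:808.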

The IPv4 address for NATing is simply the address of the VM, in this case 10.0.0.2. In order to share that address between the VM’s own usage and the NAT64 instance, we’ll keep everything in the default network namespace and use iptables rules to direct which traffic goes where.

Add a Jool instance for the NAT64 prefix. The name is arbitrary, but I’ll call mine plat to follow 464XLAT terminology.

jool instance add plat --iptables --pool6 2600:1900:4001:de6::/96

Tell the instance the IPv4 address and port range to translate traffic to. The port range must not overlap with the ports the VM uses for its own traffic. This means avoiding the ephemeral port range, which you can check with a sysctl command.

root@decagon:~# sysctl net.ipv4.ip_local_port_range
net.ipv4.ip_local_port_range = 32768    60999

If you’re running any services, also avoid conflicting with those. Most of mine are on ports below 10000, and I like to avoid port 11211 because it’s blocked at some providers due to a history of amplification attacks, so I chose 11212-32767 for the NAT64 range.
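If you’re worried about the ephemeral range moving out from under you (a kernel or image update could change it), you can pin it explicitly, and drop the setting into /etc/sysctl.d to persist it:

sysctl -w net.ipv4.ip_local_port_range="32768 60999"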

The pool needs to be set up independently for TCP, UDP, and ICMP.

jool --instance plat pool4 add -t 10.0.0.2 11212-32767
jool --instance plat pool4 add -u 10.0.0.2 11212-32767
jool --instance plat pool4 add -i 10.0.0.2 11212-32767
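You can confirm the result with pool4’s display command:

jool --instance plat pool4 display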

As a side note, ICMP obviously has no ports; the range here is actually a range of ICMP identifiers. The NAT64 rewrites the identifier field of echo requests from this pool the same way it rewrites TCP and UDP ports (RFC 6146 keys its ICMP sessions on the identifier).

Directing traffic with iptables

IPv6 traffic towards the NAT64 prefix needs to be sent to Jool. If any of the VM’s own addresses overlap with that range, those should be excluded first. In my case the VM occupies the zero address within its /96, and by excluding it, the NAT64 becomes unable to translate traffic towards 0.0.0.0. That’s ok because the zero address in IPv4 is reserved and will never appear on the wire… but again, if you can avoid an overlap, do so for simplicity.

ip6tables -t mangle -A PREROUTING -d 2600:1900:4001:de6:: -j ACCEPT
ip6tables -t mangle -A PREROUTING -d 2600:1900:4001:de6::/96 -j JOOL --instance plat

Make sure to limit which IPv6 source addresses can send traffic to your NAT64 (unless you’re intentionally running a public one, and are prepared to handle the abuse it gets). This could be done with a -s flag on the second rule. I’m already enforcing a restriction in the VPC firewall rules so don’t need to repeat it here.
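For example, if your clients all lived in a hypothetical 2001:db8:1234::/48, the second rule would become:

ip6tables -t mangle -A PREROUTING -d 2600:1900:4001:de6::/96 -s 2001:db8:1234::/48 -j JOOL --instance plat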

The return IPv4 traffic also needs to reach Jool. You could probably get away with a single blanket rule here because Jool will return untranslatable packets to the kernel, but since we have a cleanly partitioned port range, specifying it should make things a bit more efficient.

iptables -t mangle -A PREROUTING -d 10.0.0.2 -p tcp --dport 11212:32767 -j JOOL --instance plat
iptables -t mangle -A PREROUTING -d 10.0.0.2 -p udp --dport 11212:32767 -j JOOL --instance plat
iptables -t mangle -A PREROUTING -d 10.0.0.2 -p icmp -j JOOL --instance plat
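For reference, the blanket alternative mentioned above would be a single rule with no port match:

iptables -t mangle -A PREROUTING -d 10.0.0.2 -j JOOL --instance plat

Anything Jool can’t translate is handed back to the kernel, so the VM’s own traffic would still arrive; it just takes a detour through the translator first.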

Verification

Let’s go back to the client from part 1 and reconfigure it with our newly set up NAT64 prefix instead of the public one.
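Exactly how depends on your CLAT. If it’s a jool_siit instance (plausible if you followed part 1 with Jool; I’ll assume it’s named clat, so substitute your own), recreating it with the new prefix might look like this:

jool_siit instance remove clat
jool_siit instance add clat --iptables --pool6 2600:1900:4001:de6::/96

After doing so, test it: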

root@hexagon:~# curl http://ipv4.myip.wtf/text
34.70.245.156

It works! The address shown is the external address of our NAT64 gateway/PLAT. Potentially a large number of IPv6-only VMs, at the same or different cloud providers, could all share this IPv4 address.