4 February 2025
The main bottleneck for Tor relays is often the processor's clock speed. Since Tor is essentially single-threaded, the best solution is usually to run a separate Tor instance per core (or thread). Given the cost of electricity, we are looking for hardware that is both power-efficient and cost-effective while still being able to saturate the available bandwidth. The apu2, although no longer in production, remains an excellent machine: we have used it for years and it has stable coreboot support. The model we are testing has the following specifications:
We want to determine the configuration that best optimizes resource usage, considering that we have access to a full IPv4 subnet and 2.5 Gbit/s of upload bandwidth.
The tor Debian package ships a set of scripts for efficiently managing multiple Tor instances on the same system.
First, the default Tor service needs to be disabled:
systemctl disable tor
systemctl stop tor
Then, let’s create the desired instance:
tor-instance-create <name of the instance>
All instance configurations are located in /etc/tor/instances, and each instance can be managed as a separate systemd service:
systemctl start tor@<name of the instance>
systemctl enable tor@<name of the instance>
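For example, with an instance named bludicapra (one of the relay names that appears later in this post):
tor-instance-create bludicapra
systemctl enable --now tor@bludicapra
# its configuration lives at /etc/tor/instances/bludicapra/torrc
# and the process runs as the dedicated user _tor-bludicapra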
We tested with 4 processes (one per core), but due to the limited available RAM, individual instances did not exceed 4 MB/s. The minimum memory required for each instance seems to be around 400-500 MB. We therefore changed the configuration and applied some optimizations:
/etc/resolv.conf
This allowed us to reduce the base system's memory usage, leaving approximately 1.6 GB of RAM free for the Tor instances.
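On the Tor side, one knob for keeping each instance within its memory budget is the MaxMemInQueues option (it caps the memory used for queued cells, not the process's total footprint). A minimal per-instance torrc sketch; the nickname, port, and 256 MB value are illustrative, not our exact settings:
# /etc/tor/instances/bludicapra/torrc — minimal sketch, values illustrative
Nickname bludicapra
ORPort 9001
# cap the memory Tor may use for queued cells
MaxMemInQueues 256 MB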
We assigned 3 IPv4 and 3 IPv6 addresses to the same network interface, since we have dedicated subnets at our disposal; often, however, the same IP is simply reused with different ports (note that the directory authorities impose a limit on how many relays can share the same IPv4 address, currently 8).
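For reference, secondary addresses like these can be added at runtime with iproute2 (the persistent version of this configuration is shown further below):
ip addr add 64.190.76.3/24 dev enp2s0.835
ip addr add 64.190.76.4/24 dev enp2s0.835
ip -6 addr add 2001:67c:e28:1::3/64 dev enp2s0.835
ip -6 addr add 2001:67c:e28:1::4/64 dev enp2s0.835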
We thus have the following network interface:
5: enp2s0.835@enp2s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 00:0d:b9:4a:bf:71 brd ff:ff:ff:ff:ff:ff
inet 64.190.76.2/24 brd 64.190.76.255 scope global enp2s0.835
valid_lft forever preferred_lft forever
inet 64.190.76.3/24 brd 64.190.76.255 scope global secondary enp2s0.835
valid_lft forever preferred_lft forever
inet 64.190.76.4/24 brd 64.190.76.255 scope global secondary enp2s0.835
valid_lft forever preferred_lft forever
inet6 2001:67c:e28:1::4/64 scope global
valid_lft forever preferred_lft forever
inet6 2001:67c:e28:1::3/64 scope global
valid_lft forever preferred_lft forever
inet6 2001:67c:e28:1::2/64 scope global
valid_lft forever preferred_lft forever
However, after a few weeks, we were contacted by the Network Health Team because they detected that, despite having different IPv4 addresses, all our relays were exiting through the same one.
Note: this is exactly why it is crucial to provide accurate contact information when operating Tor nodes. This is not the first time we have received warnings or suggestions on how to improve our setup.
To change the source IP of a Linux process, there are several solutions, including:
1. iptables rules
2. systemd
3. an iptables frontend
We chose the third option, with shorewall, for convenience. The configuration is located in /etc/shorewall.
The files used to map processes to specific outgoing IP addresses are:
/etc/shorewall/mangle: marks outgoing packets based on their UID
/etc/shorewall/snat: applies source NAT to the marked packets, using the selected IP addresses
Here is an example configuration where packets from the UID _tor-bludicapra are marked (2) and source NAT is applied on the enp2s0.835 interface with the IP 64.190.76.2.
/etc/shorewall/mangle
#ACTION SOURCE DEST PROTO DPORT SPORT USER
MARK(2) $FW 0.0.0.0/0 - - - _tor-bludicapra
/etc/shorewall/snat
#ACTION SOURCE DEST PROTO DPORT SPORT IPSEC MARK
SNAT(64.190.76.2) - enp2s0.835 - - - - 2
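For the curious, these two rows correspond roughly to the following raw iptables rules, i.e. the first option from the list above (a sketch of what shorewall generates, not its exact output):
iptables -t mangle -A OUTPUT -m owner --uid-owner _tor-bludicapra -j MARK --set-mark 2
iptables -t nat -A POSTROUTING -o enp2s0.835 -m mark --mark 2 -j SNAT --to-source 64.190.76.2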
The prerequisite is that the outgoing addresses are already configured on the WAN interface (e.g., via /etc/network/interfaces).
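A sketch of that prerequisite with ifupdown, reusing the addresses from the ip output above (gateway and other details omitted):
auto enp2s0.835
iface enp2s0.835 inet static
    address 64.190.76.2/24
    up ip addr add 64.190.76.3/24 dev $IFACE
    up ip addr add 64.190.76.4/24 dev $IFACE
iface enp2s0.835 inet6 static
    address 2001:67c:e28:1::2/64
    up ip -6 addr add 2001:67c:e28:1::3/64 dev $IFACE
    up ip -6 addr add 2001:67c:e28:1::4/64 dev $IFACE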
Other relevant shorewall files, which do not require modifications for this configuration, are listed below in the order of filter processing:
/etc/shorewall/interfaces: for configuring the filtered interfaces
/etc/shorewall/rules: for firewall rules
/etc/shorewall/policy: for global policies
After each modification, you can verify the configuration and apply it:
shorewall check
shorewall reload
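To confirm that a given UID really leaves from its assigned address, a quick test (assuming curl is installed; icanhazip.com is just one example of an external IP echo service):
sudo -u _tor-bludicapra curl -4 -s https://icanhazip.com
# expected: 64.190.76.2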
From this experience, a small family of Italian cheeses was born: