ESnet maintains throughput and latency test hosts at ESnet points of presence (PoPs) as well as test hosts connected to the ESnet routers at many Department of Energy facilities. The primary perfSONAR services ESnet provides are throughput testing (via iperf3) and delay/loss testing (via OWAMP). Additional information can be found here: http://fasterdata.es.net/performance-testing/perfsonar/esnet-perfsonar-services/
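As a minimal sketch of interacting with these services from a perfSONAR toolkit node (the host name below is a placeholder, not an actual ESnet host), pScheduler's built-in troubleshooting command can confirm that a remote test host is reachable before any tests are scheduled:

    # Verify that the local node can talk to a remote perfSONAR host;
    # replace the placeholder with an ESnet test host from the lists
    # referenced below.
    pscheduler troubleshoot --host ps-test.example.net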
ESnet utilizes a “combined” host that runs loss, latency, and traceroute tests on a 1G NIC and throughput tests on a 10G NIC. These hosts have the following configuration:
ESnet OWAMP testers at the ESnet PoPs and sites provide the ability to measure one-way delay and packet loss on the ESnet network, and between other OWAMP test hosts and ESnet test hosts. ESnet maintains a set of logically-grouped OWAMP test results on the ESnet perfSONAR dashboard.
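As an illustration (the host name below is a placeholder for one of the ESnet OWAMP hosts), a one-off measurement against an OWAMP server can be run with owping, which reports one-way delay and loss in each direction:

    # Quick one-way delay/loss check using the default packet stream
    owping owamp.example.net

    # A longer sample: 600 packets at roughly 10 packets per second
    owping -c 600 -i 0.1 owamp.example.net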
The list of current ESnet OWAMP test hosts can be found in one of two ways:
ESnet throughput testers at the ESnet PoPs and sites provide the ability to measure throughput between locations on the ESnet network, and between other throughput test hosts and ESnet test hosts. ESnet maintains a set of logically-grouped throughput test results on the ESnet perfSONAR dashboard.
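For illustration (the host name below is a placeholder for an ESnet throughput host), a throughput test can be requested through pScheduler on a perfSONAR toolkit node, or run directly with iperf3 where a standing iperf3 server is available and the test ports are open:

    # 30-second TCP throughput test scheduled through pScheduler
    pscheduler task throughput --dest ps-bw.example.net --duration PT30S

    # Roughly equivalent direct iperf3 run
    iperf3 -c ps-bw.example.net -t 30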
The list of current ESnet throughput test hosts can be found in one of two ways:
ESnet permits tests to ESnet throughput testers from any ESnet site, and from any scientific or research institution that is connected to the global research and education network infrastructure. This includes US laboratories and universities, as well as research laboratories and universities in Africa, Asia, Australia, Europe, and Latin America. This is accomplished by including the global R&E routing table (the set of IP prefixes accessible via peerings with R&E networks) via the Limiting Tests to R&E Networks Only mechanism. Other logical groups of addresses such as Amazon Cloud services are added to the file from time to time as the needs of the scientific community evolve. This page has more details.
If your site uses router ACLs, the ESnet subnet listing can be found here: http://fasterdata.es.net/performance-testing/perfsonar/esnet-perfsonar-services/esnet-subnet-filters/
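As a rough host-level sketch of such filtering (the subnet below is a documentation placeholder for the ESnet prefixes published at that URL, and the ports reflect commonly documented perfSONAR defaults that should be verified against current firewall guidance):

    # Allow the pScheduler API (HTTPS), OWAMP control, and the default
    # OWAMP test port range from a placeholder ESnet prefix.
    ESNET_NET="198.51.100.0/24"
    iptables -A INPUT -s "$ESNET_NET" -p tcp --dport 443 -j ACCEPT
    iptables -A INPUT -s "$ESNET_NET" -p tcp --dport 861 -j ACCEPT
    iptables -A INPUT -s "$ESNET_NET" -p udp --dport 8760:9960 -j ACCEPT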
Internet2 maintains a set of performance test nodes for performance assurance on the AL2S and AL3S networks.
The following is the hardware configuration for Internet2 perfSONAR nodes:
The following describes the operating software of the Internet2 perfSONAR nodes:
Measurement configurations to build and maintain the testing mesh are created by querying the OESS service, re-generating necessary files, and pushing them to resources via the management framework.
End hosts are configured to pass traffic on existing AL2S VLANs as managed by OESS, and do not change dynamically. There is no QoS policy on these links, meaning bandwidth is not managed or shaped.
The network expectations are that the test hosts will also achieve at least 9Gbps of UDP traffic for the 100Gbps capacity links. Alarms are configured to alert when this drops below a threshold of 8Gbps, indicating a network problem such as gradual performance degradation or congestion.
UDP bandwidth measurements are used in an effort to decouple end-host effects from network performance; the goal of the Performance Assurance Service (PAS) is to ensure the network is delivering the desired performance levels. Hosts were specifically tuned and tested to ensure maximum UDP performance could be achieved. Future directions of the PAS may include end-host-focused testing, including TCP throughput.
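A hand-run equivalent of this kind of check might look like the following sketch (the host name is a placeholder for a PAS node, and the parameters mirror the expectations described above rather than the exact PAS configuration):

    # 20-second, 9Gbps UDP test; sustained receive rates below roughly
    # 8Gbps would correspond to the alarm threshold described above.
    iperf3 -c pas-node.example.net -u -b 9G -t 20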
The Internet2 dashboard can be found at the following location: https://pas.net.internet2.edu
Additional documentation on Internet2’s Performance Assurance Service can be found here: http://www.internet2.edu/products-services/performance-analytics/performance-assurance-service/
The Network Startup Resource Center (NSRC) routinely deploys perfSONAR hardware in emerging networks around the world and has tested two configurations. The first retails for ~$700 USD and was assembled at ServersDirect. This setup is for a 1G host, but can support a 10G daughter card:
The second retails for ~$1100 and comes with a 10G card:
The following information was researched in November of 2015 and relates to the Intel NUC DN2820FYKH. Note that changing hardware specifications may make this information obsolete; it is provided as a potential deployment scenario only. Cost was approximately $150 USD at the time of specification for a bare machine. The purchase of a 2.5” hard drive and DDR3L SO-DIMM RAM (suggested 4GB to 8GB) is also required for full functionality and will cost extra. These boxes have a dual-core processor at 2.17GHz (the model number says DN2820, but /proc/cpuinfo reports a Celeron N2830), and they claim to support VT-x virtualization. The Gigabyte Brix GB-BXBT-2807 is comparable: Celeron N2807, 1.58GHz.
Performance-wise, these have been shown to reach near-gigabit speeds: over a direct connection between two NUCs, iperf3 reported 942Mbps (the theoretical maximum once IP and Ethernet headers are taken into account). On the sending side, top shows about 38% CPU used by iperf3 and 76% idle; on the receiving side, iperf3 uses about 26% CPU with 90% idle.
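That measurement can be reproduced with a plain back-to-back iperf3 run (the address below is a placeholder for the other NUC):

    # On the first NUC, start an iperf3 server:
    iperf3 -s

    # On the second NUC, run a 30-second TCP test toward the first:
    iperf3 -c 192.0.2.10 -t 30

    # In another terminal on either NUC, watch CPU headroom:
    top -d 1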
GÉANT introduced perfSONAR on low-cost hardware to build a perfSONAR network measurement platform in Europe and to give users first-hand experience running a perfSONAR node on a small PC. This was accomplished by distributing pre-configured perfSONAR nodes running on small devices and making them part of a GÉANT-maintained measurement mesh. The platform used BRIX BACE-3150 devices with a 1Gb Ethernet interface, a 120GB SSD drive, and 8GB of RAM, costing about 200 Euros each. These boxes have a quad-core Intel Celeron running at 1.6GHz.
A master node was cloned with Clonezilla to all of the other mini-PCs, which were then distributed to users
This setup was centrally managed via a server provided by the project
Measurements were scheduled from the small nodes to a set of 4 GÉANT measurement points (MPs), for latency and throughput over both IPv4 and IPv6
Users were able to configure their own tests
The central server was running a MaDDash instance
MRTG monitoring of the central server was configured to track resource usage (see the sketch below)
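As a sketch of how the MRTG piece could be set up (the host name and SNMP community string are placeholders, and the stock cfgmaker output only covers interface traffic; CPU and memory targets would be added separately):

    # Generate an MRTG configuration for the central server via SNMP,
    # build an index page, and poll every five minutes from cron.
    cfgmaker --global 'WorkDir: /var/www/mrtg' \
             --output=/etc/mrtg/mrtg.cfg \
             public@central-server.example.net
    indexmaker /etc/mrtg/mrtg.cfg > /var/www/mrtg/index.html

    # Example cron entry:
    # */5 * * * * root env LANG=C /usr/bin/mrtg /etc/mrtg/mrtg.cfg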
The following information was researched and tested in October of 2016 for 100G perfSONAR. Testing revealed that CentOS 7 with fair queuing performed the best: