Accurate connectivity data is the foundation for investments in broadband infrastructure. Unfortunately, connectivity data provided to the Federal Communications Commission is often inaccurate and inflated – leaving many rural communities overlooked and disconnected.

NACo has partnered with the Local Initiatives Support Corporation (LISC), the Rural Community Assistance Partnership (RCAP), the National Association of Development Organizations (NADO) and Farm Credit to develop a mobile app designed to identify areas with low or no connectivity to help ensure adequate funding for broadband infrastructure is provided across the country.

“TestIT” is an iOS/Android mobile app that leverages a broadband sampling tool designed by Measurement Lab (M-Lab) to aggregate broadband speeds across the country from app users. With the press of a single button, users will be able to test their broadband speed from anywhere. Additionally, users will be able to compare their internet speeds to the national average and to the minimum standards established by the Federal Communications Commission. No personal information will be collected through this mobile app.

A snapshot of each sample will be sent to a database which will allow NACo and partners to analyze connectivity data across the country. The data collected through this app will help identify areas where broadband service is overstated and underfunded by comparing the data to the National Broadband Map.

Your help identifying gaps in our nation’s broadband coverage is critical to making substantive changes to the process for reporting broadband service. We hope you will help shed light on this critically important issue and encourage your friends, family and constituents to join in the efforts as well!

Why might test results vary from other speed tests?

Internet performance tests may provide different results for many reasons. The TestIT app uses the Network Diagnostic Tool (NDT) designed by Measurement Lab (M-Lab) to measure connectivity speeds. Three of the main reasons for differing results among tests are listed below:

1. Differences in the location of testing servers

Every performance test has two parts:

Client: The software that runs on the user’s machine and shows the user their speed results.

Server: The computer on the Internet to which the client connects to complete the test.

A test generates data between the client and the server, and measures performance between these two points. The location of these two points is important in terms of understanding the results of a given test.
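The client/server structure described above can be sketched in code. This is not the TestIT app or M-Lab's actual NDT implementation; it is a minimal, hypothetical illustration in Python using a loopback server, showing the two parts of any performance test: a server that streams data, and a client that reads for a fixed window and computes throughput.

```python
import socket
import threading
import time

CHUNK = b"\0" * 65536  # 64 KiB of filler data

def run_server(listener):
    """Server part: accept one client and stream bytes as fast as possible."""
    conn, _ = listener.accept()
    try:
        while True:
            conn.sendall(CHUNK)
    except OSError:
        pass  # the client closed its end of the connection
    finally:
        conn.close()

def start_server():
    """Start the server on a loopback address and return its port."""
    listener = socket.socket()
    listener.bind(("127.0.0.1", 0))  # port 0: let the OS pick a free port
    listener.listen(1)
    threading.Thread(target=run_server, args=(listener,), daemon=True).start()
    return listener.getsockname()[1]

def measure_download_mbps(host, port, seconds=2.0):
    """Client part: read from the server for a fixed window and report
    average throughput in megabits per second."""
    received = 0
    with socket.create_connection((host, port)) as s:
        deadline = time.monotonic() + seconds
        while time.monotonic() < deadline:
            data = s.recv(65536)
            if not data:
                break
            received += len(data)
    return received * 8 / seconds / 1_000_000  # bytes -> megabits per second

if __name__ == "__main__":
    port = start_server()
    mbps = measure_download_mbps("127.0.0.1", port, seconds=1.0)
    print(f"Measured ~{mbps:.1f} Mbit/s over loopback")
```

Because both endpoints here sit on the same machine, the result only reflects local speed; in a real test like NDT, the distance and the networks between client and server shape the number you see.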

If the server is located within your Internet Service Provider’s (ISP’s) own network (also known as the “last mile”), this is referred to as an “on-net” measurement. This approach lets you know about how your Internet connection is performing intra-network within your ISP, but it does not necessarily reflect the full experience of using the Internet, which almost always involves using inter-network connections (connections between networks) to access content and services that are hosted somewhere outside of your ISP. Results from on-net testing are often higher than those achieved by using other methods, since the “distance” traveled is generally shorter, and the network is entirely controlled by one provider (your ISP).

“Off-net” measurements occur between your computer and a server located outside of your ISP’s network. This means that traffic crosses inter-network borders and often travels longer distances. Off-net testing frequently produces results that are lower than those produced from on-net testing.

M-Lab’s measurements are always conducted off-net. This way, M-Lab is able to measure performance from testers’ computers to locations where popular Internet content is often hosted. By having inter-network connections included in the test, test users get a real sense of the performance they could expect when using the Internet.

2. Differences in testing methods

Different Internet performance tests measure different things in different ways. M-Lab’s NDT test tries to transfer as much data as it can in ten seconds (both up and down), using a single connection to an M-Lab server. Other popular tests try to transfer as much data as possible at once across multiple connections to their server. Neither method is “right” or “wrong,” but using a single stream is more likely to help diagnose problems in the network than multiple streams would. Learn more about M-Lab’s NDT methodology.
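The single-stream versus multi-stream distinction can also be sketched. The following hypothetical Python example (again using a loopback server, not M-Lab's code) measures throughput with a configurable number of parallel connections, so the same function models both NDT's single-stream approach (`streams=1`) and the multi-connection approach other popular tests use.

```python
import socket
import threading
import time

CHUNK = b"\0" * 65536

def serve_forever(listener):
    """Accept any number of clients; stream bytes to each until it disconnects."""
    def feed(conn):
        try:
            while True:
                conn.sendall(CHUNK)
        except OSError:
            conn.close()
    while True:
        conn, _ = listener.accept()
        threading.Thread(target=feed, args=(conn,), daemon=True).start()

def start_server():
    """Start the multi-client server on loopback and return its port."""
    listener = socket.socket()
    listener.bind(("127.0.0.1", 0))
    listener.listen(8)
    threading.Thread(target=serve_forever, args=(listener,), daemon=True).start()
    return listener.getsockname()[1]

def measure(host, port, streams=1, seconds=1.0):
    """Open `streams` parallel connections, read from each for a fixed
    window, and return total throughput in Mbit/s summed across streams."""
    totals = [0] * streams

    def reader(i):
        with socket.create_connection((host, port)) as s:
            deadline = time.monotonic() + seconds
            while time.monotonic() < deadline:
                data = s.recv(65536)
                if not data:
                    return
                totals[i] += len(data)

    threads = [threading.Thread(target=reader, args=(i,)) for i in range(streams)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return sum(totals) * 8 / seconds / 1_000_000
```

On a congested or lossy path, the multi-stream total often comes out higher than the single-stream figure, which is one reason results differ across testing tools, while a single stream makes it easier to attribute a slowdown to a specific network problem.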

All NDT data collected by M-Lab are publicly available in visualized (graphic), queryable, and raw (unanalyzed) forms.

3. Changing network conditions and distinct test paths

The Internet is always changing, and test results reflect that. A test conducted five minutes ago may show very different results from a test conducted twenty minutes ago. This can be caused by the test traffic being routed differently. For example, one test might travel over a path with a broken router, while another may not. A test run today may be directed to a test server located farther away than a test run yesterday. Additionally, IPv4 and IPv6 routes may take different physical paths. Some IPv6 routes may be tunneled through IPv4, either from the client or at any point after the client, depending on local network management.

In short, running one test will give you a sense of network conditions at that moment, across the best network path available at that time, to the specific server coordinating the test. But because Internet routing and infrastructure change dynamically, testing regularly and looking at the data over time are much more reliable ways to gauge representative performance.
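The "test regularly and look at the data over time" advice amounts to simple aggregation. This short sketch (hypothetical helper, not part of the TestIT app) summarizes repeated speed-test results so that one unlucky measurement, say, a test that happened to cross a congested path, does not dominate the picture.

```python
import statistics

def summarize_speed_tests(mbps_samples):
    """Summarize repeated speed-test results in Mbit/s.

    The median of many tests is a more stable estimate of typical
    performance than any single test, and the min/max spread shows
    how much conditions varied across test runs.
    """
    return {
        "median_mbps": statistics.median(mbps_samples),
        "min_mbps": min(mbps_samples),
        "max_mbps": max(mbps_samples),
    }

# Hypothetical results from tests run at different times of day;
# the 12.1 outlier might reflect a momentarily congested route.
samples = [94.2, 87.5, 12.1, 91.0, 89.7, 90.3, 88.8]
print(summarize_speed_tests(samples))
```

Here the median (89.7 Mbit/s) describes typical service far better than the single worst run would, which is exactly why repeated sampling matters for mapping real-world connectivity.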
