Accurate connectivity data is the foundation for investments in broadband infrastructure. Unfortunately, connectivity data provided to the Federal Communications Commission is often inaccurate and inflated – leaving many rural communities overlooked and disconnected.

NACo has partnered with the Local Initiatives Support Corporation (LISC), the Rural Community Assistance Partnership (RCAP), the National Association of Development Organizations (NADO) and Farm Credit to develop a mobile app designed to identify areas with low or no connectivity to help ensure adequate funding for broadband infrastructure is provided across the country.

“TestIT” is an iOS/Android mobile app that leverages a broadband sampling tool designed by Measurement Lab (M-Lab) to aggregate broadband speeds across the country from app users. With the press of a single button, users will be able to test their broadband speed from anywhere. Additionally, users will be able to compare their internet speeds to the national average and to the minimum standards established by the Federal Communications Commission. No personal information will be collected through this mobile app.
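
For illustration, comparing a result against the FCC minimum amounts to a simple threshold check. Here is a minimal sketch in Python, assuming the FCC's 25 Mbps download / 3 Mbps upload fixed-broadband benchmark that applied when the app launched; the constant and function names are illustrative, not TestIT's actual code:

```python
# Illustrative only: threshold values reflect the FCC's fixed-broadband
# benchmark (25 Mbps down / 3 Mbps up) at the time of the app's launch.
FCC_MIN_DOWNLOAD_MBPS = 25.0
FCC_MIN_UPLOAD_MBPS = 3.0

def meets_fcc_benchmark(download_mbps: float, upload_mbps: float) -> bool:
    """Return True if a measured connection meets the FCC minimum standard."""
    return (download_mbps >= FCC_MIN_DOWNLOAD_MBPS
            and upload_mbps >= FCC_MIN_UPLOAD_MBPS)

print(meets_fcc_benchmark(18.2, 4.1))  # False: download falls below 25 Mbps
```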

A snapshot of each sample will be sent to a database, allowing NACo and its partners to analyze connectivity data across the country. By comparing the collected data to the National Broadband Map, the app will help identify areas where broadband service is overstated and underfunded.

Your help identifying gaps in our nation’s broadband coverage is critical to making substantive changes to the process for reporting broadband service. We hope you will help shed light on this critically important issue and encourage your friends, family and constituents to join in the efforts as well!

Why might test results vary from other speed tests?

Internet performance tests may provide different results for a number of reasons. The TestIT app uses the Network Diagnostic Tool (NDT) designed by Measurement Lab (M-Lab) to measure connectivity speeds. Three of the main reasons for differing results among tests are listed below:

1. Differences in the location of testing servers

Every performance test has two parts:

Client: the software that runs on the user’s machine and shows the user their speed results.

Server: the computer on the Internet to which the client connects to complete the test.

A test generates data between the client and the server and measures performance between these two points. The location of these two points matters when interpreting the results of a given test.
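
To make the two parts concrete, here is a minimal sketch in Python using a plain TCP socket rather than M-Lab's actual NDT protocol: the "server" half streams data for ten seconds, and the "client" half times what it receives. Run on loopback it measures your machine rather than your connection; a real test places the server on a remote network.

```python
import socket
import threading
import time

CHUNK = 64 * 1024  # 64 KiB of filler data per send

def server(host="127.0.0.1", port=5001, duration=10.0):
    """The 'server' half: accept one client, then stream data for `duration` seconds."""
    with socket.create_server((host, port)) as srv:
        conn, _ = srv.accept()
        payload = b"\x00" * CHUNK
        deadline = time.monotonic() + duration
        with conn:
            while time.monotonic() < deadline:
                conn.sendall(payload)

def client(host="127.0.0.1", port=5001):
    """The 'client' half: receive until the server closes, then report Mbps."""
    received = 0
    start = time.monotonic()
    with socket.create_connection((host, port)) as sock:
        while chunk := sock.recv(CHUNK):
            received += len(chunk)
    elapsed = time.monotonic() - start
    print(f"{received * 8 / elapsed / 1e6:.1f} Mbps over {elapsed:.1f}s")

threading.Thread(target=server, daemon=True).start()
time.sleep(0.2)  # let the server start listening before the client connects
client()
```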

If the server is located within your Internet Service Provider’s (ISP’s) own network (also known as the “last mile”), this is referred to as an “on-net” measurement. This approach shows how your connection performs within your ISP’s own network, but it does not necessarily reflect the full experience of using the Internet, which almost always involves inter-network connections (connections between networks) to reach content and services hosted somewhere outside of your ISP. Results from on-net testing are often higher than those achieved by other methods, since the “distance” traveled is generally shorter and the network is entirely controlled by one provider (your ISP).

“Off-net” measurements occur between your computer and a server located outside of your ISP’s network. This means that traffic crosses inter-network borders and often travels longer distances. Off-net testing frequently produces results that are lower than those produced from on-net testing.

M-Lab’s measurements are always conducted off-net. This way, M-Lab is able to measure performance from testers’ computers to locations where popular Internet content is often hosted. By having inter-network connections included in the test, test users get a real sense of the performance they could expect when using the Internet.
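
One rough way to see the on-net/off-net distinction for yourself is to compare the autonomous system (AS) that announces your public IP address with the one that announces a test server's. The sketch below assumes the free ipinfo.io lookup service and its "org" field, and uses M-Lab's website hostname as a stand-in for a test server; any IP-to-ASN source would work.

```python
import json
import socket
from urllib.request import urlopen

def org_of(ip=None):
    """Return the 'AS<number> <network name>' string for an IP (yours if None).

    Assumes ipinfo.io's unauthenticated JSON endpoint and its 'org' field.
    """
    url = f"https://ipinfo.io/{ip}/json" if ip else "https://ipinfo.io/json"
    with urlopen(url, timeout=10) as resp:
        return json.load(resp).get("org", "unknown")

my_org = org_of()  # the AS behind your public IP
server_ip = socket.gethostbyname("www.measurementlab.net")  # stand-in for a test server
server_org = org_of(server_ip)
print("on-net" if my_org == server_org else "off-net", "|", my_org, "vs", server_org)
```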

2. Differences in testing methods

Different Internet performance tests measure different things in different ways. M-Lab’s NDT test tries to transfer as much data as it can in ten seconds (both up and down), using a single connection to an M-Lab server. Other popular tests try to transfer as much data as possible at once across multiple connections to their server. Neither method is “right” or “wrong,” but using a single stream is more likely to help diagnose problems in the network than multiple streams would. Learn more about M-Lab’s NDT methodology.
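
The difference between the two methods can be sketched directly: time one download, then time several run in parallel and sum the bytes. The test URL below is hypothetical; substitute any large file hosted outside your ISP.

```python
import time
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

TEST_URL = "https://speed.example.net/100MB.bin"  # hypothetical test file

def download(url):
    """Fetch the whole file and return the number of bytes received."""
    with urlopen(url, timeout=30) as resp:
        return len(resp.read())

def measure(streams):
    """Time `streams` parallel downloads and return aggregate throughput in Mbps."""
    start = time.monotonic()
    with ThreadPoolExecutor(max_workers=streams) as pool:
        total = sum(pool.map(download, [TEST_URL] * streams))
    elapsed = time.monotonic() - start
    return total * 8 / elapsed / 1e6

print(f"single stream: {measure(1):.1f} Mbps")  # NDT-style: one connection
print(f"four streams:  {measure(4):.1f} Mbps")  # multi-stream tests sum several
```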

All NDT data collected by M-Lab are publicly available in visualized (graphic), queryable, and raw (unanalyzed) forms.
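
As a sketch of the queryable form: M-Lab publishes its measurements as public BigQuery tables. Everything beyond the google-cloud-bigquery client itself is an assumption here; the table and column names reflect M-Lab's published unified-views schema at the time of writing.

```python
# pip install google-cloud-bigquery; requires Google Cloud credentials.
from google.cloud import bigquery

client = bigquery.Client()
# Assumed schema: measurement-lab.ndt.unified_downloads with
# a.MeanThroughputMbps, client.Geo.CountryCode, and a `date` partition column.
query = """
    SELECT date, COUNT(*) AS tests,
           APPROX_QUANTILES(a.MeanThroughputMbps, 2)[OFFSET(1)] AS median_mbps
    FROM `measurement-lab.ndt.unified_downloads`
    WHERE date BETWEEN '2024-01-01' AND '2024-01-07'
      AND client.Geo.CountryCode = 'US'
    GROUP BY date
    ORDER BY date
"""
for row in client.query(query).result():
    print(row.date, row.tests, f"{row.median_mbps:.1f} Mbps")
```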

3. Changing network conditions and distinct test paths

The Internet is always changing, and test results reflect that. A test conducted five minutes ago may show very different results from a test conducted twenty minutes ago. This can be caused by the test traffic being routed differently. For example, one test might travel over a path with a broken router, while another may not. A test run today may be directed to a test server located farther away than the one used by a test run yesterday. Additionally, IPv4 and IPv6 routes may take different physical paths. Some IPv6 routes may be tunneled through IPv4, either from the client or at any point after it, depending on local network management.
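
The IPv4/IPv6 point is easy to observe: the same hostname often resolves to entirely different endpoints in each address family, so a test can take a different path depending on which one the client uses. A small sketch, with an illustrative hostname:

```python
import socket

HOST = "www.google.com"  # illustrative: a host with both IPv4 and IPv6 records

for family, label in ((socket.AF_INET, "IPv4"), (socket.AF_INET6, "IPv6")):
    try:
        infos = socket.getaddrinfo(HOST, 443, family, socket.SOCK_STREAM)
        addrs = sorted({info[4][0] for info in infos})
        print(f"{label}: {', '.join(addrs)}")
    except socket.gaierror:
        print(f"{label}: no records/route available from this network")
```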

In short, running one test will give you a sense of network conditions at that moment, across the best network path available at that time, to the specific server coordinating the test. But because Internet routing and infrastructure change dynamically, testing regularly and looking at the data over time are much more reliable ways to gauge representative performance.
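
In code terms, "testing regularly" is just sampling and summarizing. A sketch, assuming a hypothetical run_speed_test() hook that wraps whichever NDT client you use and returns a download speed in Mbps:

```python
import statistics
import time

def run_speed_test():
    """Hypothetical hook: run one NDT test and return download Mbps.
    Wrap whichever client you use (the TestIT app fills this role on mobile)."""
    raise NotImplementedError

samples = []
for _ in range(12):          # e.g., one test every 30 minutes for 6 hours
    samples.append(run_speed_test())
    time.sleep(30 * 60)

# The median over many samples is less sensitive to one-off routing hiccups
# than any single result.
print(f"median over {len(samples)} tests: {statistics.median(samples):.1f} Mbps "
      f"(min {min(samples):.1f}, max {max(samples):.1f})")
```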
