End-to-End Network Tuning Sends Data Screaming from NERSC to NOAA

Image caption: (a) 24-hour observed precipitation amounts for 9 January 1995; (b) average 1-day precipitation forecasts; (c) today's forecast calibrated with old reforecasts and precipitation analyses. Image courtesy of NOAA Earth System Research Laboratory.

September 21, 2012

Jon Bashor, Jbashor@lbl.gov, 510-486-5849

When it comes to moving large datasets between DOE’s National Energy Research Scientific Computing Center (NERSC) and his home institution in Boulder, Colo., Gary Bates is no slouch. As an associate scientist in the Earth System Research Laboratory of the National Oceanic and Atmospheric Administration (NOAA), Bates has transferred hundreds of thousands of files to and from NERSC as part of a weather “reforecasting” project.

The “reforecasting” project, led by NOAA’s Tom Hamill, involves running several decades of historical weather forecasts with the same (2012) version of NOAA’s Global Ensemble Forecast System (GEFS). A key advantage of such a long reforecast dataset is that systematic model forecast errors can be diagnosed from the past forecasts and corrected, dramatically increasing forecast skill, especially for relatively rare events and at longer forecast leads.
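
To make the idea concrete, here is a deliberately oversimplified sketch in Python of reforecast-based calibration, using synthetic data and a plain mean-bias correction; the data, names, and method are illustrative assumptions, and NOAA’s actual calibration techniques are considerably more sophisticated.

```python
import numpy as np

# Deliberately oversimplified illustration of reforecast-based calibration:
# estimate the model's systematic error (bias) from a long archive of past
# forecasts and verifying observations, then subtract it from a new forecast.
# The synthetic data and the mean-bias approach are illustrative assumptions.

rng = np.random.default_rng(0)

# Synthetic "reforecast" archive: 28 years of daily forecasts and verifying
# analyses for one grid point and one lead time (precipitation in mm).
truth = rng.gamma(shape=2.0, scale=3.0, size=28 * 365)
reforecasts = truth + 1.5 + rng.normal(0.0, 2.0, size=truth.size)  # model with a +1.5 mm wet bias

# Diagnose the systematic error from the reforecast archive...
bias = np.mean(reforecasts - truth)

# ...and use it to correct today's real-time forecast.
todays_forecast = 12.0  # mm, raw model output
calibrated = todays_forecast - bias
print(f"estimated bias: {bias:+.2f} mm, calibrated forecast: {calibrated:.2f} mm")
```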

The GEFS weather forecast model used in this project is the same version currently run in real time by the National Weather Service. In the reforecast project, GEFS forecasts were made on a daily basis from 1984 through early 2012, out to a forecast lead of 16 days. To further improve forecast skill, an ensemble of 11 realizations was run each day, the members differing only slightly in their initial conditions.
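
As a toy illustration of the ensemble approach (not the GEFS itself), the sketch below integrates 11 realizations of a simple chaotic system that differ only in tiny perturbations to their initial conditions; every detail here is a stand-in.

```python
import numpy as np

# Toy illustration of the ensemble idea (this is not the GEFS): generate 11
# realizations of a simple chaotic system (Lorenz-63) that differ only in
# tiny perturbations to their initial conditions, then look at the spread.

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz-63 system."""
    x, y, z = state
    deriv = np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])
    return state + dt * deriv

rng = np.random.default_rng(42)
base_state = np.array([1.0, 1.0, 1.0])

members = []
for _ in range(11):  # 11 realizations, as in the daily reforecast runs
    state = base_state + rng.normal(0.0, 1e-3, size=3)  # slight initial perturbation
    for _ in range(1600):  # integrate forward (a stand-in for a 16-day lead)
        state = lorenz_step(state)
    members.append(state)

members = np.array(members)
print("ensemble mean:  ", members.mean(axis=0))
print("ensemble spread:", members.std(axis=0))
```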

In 2010, the NOAA team received an ALCC allocation of 14.5 million processor hours on NERSC supercomputers to perform this work. In all, the 1984-2012 historical GEFS dataset now totals over 800 terabytes, stored on the NERSC HPSS archival system. Of that, the NOAA team sought to bring about 170 terabytes back to NOAA Boulder for further processing and to make it more readily available to other researchers. Because of the large quantity of data involved, it is important that the data move as quickly and easily as possible, both across the network and at the end points in Oakland and Boulder.

Bates was able to bring over the bulk of the 170 terabytes earlier in the year, using a machine that NOAA’s Boulder Network Operations Center (BNOC) staff had temporarily set up as a Globus Online endpoint. Globus Online is a cloud-based tool for high-performance data transfers. However, when the remainder of the data was ready to be moved this summer, that machine was being used for other tasks and was no longer available.
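
For readers curious what a scripted Globus transfer looks like, here is a minimal sketch using the present-day Globus Python SDK; the token, endpoint IDs, and paths are placeholders, and the SDK postdates the 2012 transfers described here, which went through the Globus Online service itself.

```python
import globus_sdk

# Sketch of scripting a transfer with the Globus Python SDK (globus-sdk 3.x).
# The access token, endpoint UUIDs, and paths below are placeholders.

TRANSFER_TOKEN = "..."  # OAuth2 access token for the Globus Transfer API (placeholder)
NERSC_ENDPOINT = "uuid-of-nersc-data-transfer-endpoint"   # placeholder
BOULDER_ENDPOINT = "uuid-of-noaa-boulder-dtn-endpoint"    # placeholder

tc = globus_sdk.TransferClient(
    authorizer=globus_sdk.AccessTokenAuthorizer(TRANSFER_TOKEN)
)

tdata = globus_sdk.TransferData(
    tc,
    NERSC_ENDPOINT,
    BOULDER_ENDPOINT,
    label="GEFS reforecast batch",
    sync_level="checksum",  # skip files that already match at the destination
)
tdata.add_item("/archive/gefs/1984/", "/data/gefs/1984/", recursive=True)

task = tc.submit_transfer(tdata)
print("submitted transfer task:", task["task_id"])
```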

When he tried to use an FTP server located behind NOAA’s firewall for the remaining transfers, Bates discovered that data trickled in at about 1-2 megabytes per second. So Keith Holub of the BNOC staff set up a new, dedicated server with a data path unencumbered by legacy firewalls. The BNOC staff configured the new Globus Online endpoint node using information from ESnet’s fasterdata website, a knowledge base of tips and tricks for speeding up end-to-end data transfers. This kind of configuration is an example of ESnet’s Science DMZ model for high-performance systems supporting data-intensive science. The change was instantly noticeable: transfer rates far exceeded anything achieved previously.
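
Much of that host-tuning guidance comes down to kernel settings such as larger TCP socket buffers on the data transfer node. The snippet below is a small Python check of a few of the relevant Linux settings; the "suggested" values are illustrative assumptions roughly in line with fasterdata's guidance for a ~10 Gbps path, not an official recommendation.

```python
from pathlib import Path

# Quick check of a few Linux TCP settings commonly tuned on data transfer
# nodes, in the spirit of ESnet's fasterdata host-tuning guidance. The
# "suggested" values are illustrative assumptions for a ~10 Gbps path;
# see https://fasterdata.es.net/ for specifics.

CHECKS = {
    "net/core/rmem_max": 67108864,            # max receive socket buffer (bytes)
    "net/core/wmem_max": 67108864,            # max send socket buffer (bytes)
    "net/ipv4/tcp_congestion_control": None,  # algorithm in use (informational)
}

for key, suggested in CHECKS.items():
    path = Path("/proc/sys") / key
    try:
        current = path.read_text().strip()
    except OSError:
        print(f"{key}: not readable on this system")
        continue
    if suggested is None:
        print(f"{key}: {current}")
    else:
        status = "ok" if int(current.split()[0]) >= suggested else f"below suggested {suggested}"
        print(f"{key}: {current} ({status})")
```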

“Whoa! Transfer from NERSC to the BNOC data transfer node using Globus is screaming!,” Bates wrote to his team and Eli Dart of ESnet. “I transferred 273 files with a total size of 239.5 gigabytes in just over 10 minutes. I calculate that’s a rate of 395 megabytes per second. I've never gotten anything close to that before. Transferring the same 239.5 gigabytes from BNOC data node down to my local data storage is slower but still very good: it took about 81 minutes, or 49 MB/s.”

Dart learned about Bates’ data transfer challenges a few months ago when Bates was trying to move data to NERSC, uploading the files from tape onto an FTP server. Dart points out that Bates’ transfer rate adds up to more than 1 terabyte per hour.
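
Those figures are easy to verify with a quick back-of-the-envelope check (taking "just over 10 minutes" as roughly 606 seconds and using decimal megabytes):

```python
# Back-of-the-envelope check of the transfer rates quoted above
# (decimal units; "just over 10 minutes" taken as roughly 606 seconds).
size_gb = 239.5

nersc_to_bnoc_s = 606                      # NERSC -> BNOC data transfer node
rate_mb_s = size_gb * 1000 / nersc_to_bnoc_s
print(f"NERSC -> BNOC: ~{rate_mb_s:.0f} MB/s "
      f"(~{rate_mb_s * 3600 / 1e6:.1f} TB/hour)")   # > 1 TB/hour, as Dart notes

bnoc_to_local_s = 81 * 60                  # BNOC node -> local storage
print(f"BNOC -> local: ~{size_gb * 1000 / bnoc_to_local_s:.0f} MB/s")
```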

“Now the system really rocks,” Dart said. “NERSC has a well-configured data transfer infrastructure that operates very well. When all the right things are done at the other end, everything runs well – this is the way it’s supposed to work.”

Fine Tuning for Faster Performance

Damian Hazen of NERSC’s Mass Storage Group regularly helps users, including Bates, who are looking to move large datasets to and from NERSC’s data archive as quickly and easily as possible. Because the center deals with a large number of users, NERSC staff are diligent about tuning local transfer nodes for the best performance, Hazen said. The staff also helps users track down problems; in Bates’ case, the culprit was the firewall at his institution. By understanding the characteristics of the file systems and the network, NERSC and ESnet staff have a pretty good idea of what optimal performance should be, and then work with users and other staff to get as close to that level as they can.

Another way to improve a user’s overall performance is to help streamline the workflow. For example, it can take 90 seconds or more to locate and mount the archive tape holding the requested data. If several of the requested files are on the same tape, grouping the requests means they can be read consecutively from a single mount. It seems like a small thing, Hazen said, but it can make a big difference when a lot of files are involved.
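
A minimal sketch of that grouping idea, with hypothetical file names and tape/position metadata:

```python
from collections import defaultdict

# Illustrative sketch of the request-grouping idea described above: order
# archive retrievals so that all files on the same tape are read in one pass,
# paying the tape locate/mount cost (~90 seconds) once per tape rather than
# once per file. The file names and tape/position metadata are hypothetical.

requests = [
    {"file": "gefs_1984_001.grb2", "tape": "T0142", "position": 37},
    {"file": "gefs_1991_203.grb2", "tape": "T0387", "position": 5},
    {"file": "gefs_1984_002.grb2", "tape": "T0142", "position": 38},
    {"file": "gefs_1991_204.grb2", "tape": "T0387", "position": 6},
]

by_tape = defaultdict(list)
for req in requests:
    by_tape[req["tape"]].append(req)

for tape, reqs in by_tape.items():
    reqs.sort(key=lambda r: r["position"])  # read consecutively within the tape
    print(f"mount {tape} once, then read:", [r["file"] for r in reqs])
```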

Hazen credits Bates with being eager to implement new tools, such as Globus Online. In fact, Globus Online named Bates the “user of the month” in October 2011 for being one of the biggest users in terms of the amount of data transferred. “In fact with his work over the past two weeks, Gary now sits as the top 2 user ever in terms of data moved,” Globus Online noted in announcing the honor.

NERSC staff also helped Bates port and tune his application to speed up its performance on Franklin and Hopper, the center’s Cray supercomputers, and Carver, an IBM system. Helen He in NERSC’s User Services Group specializes in helping users who run climate and weather codes. In Bates’ case, He rewrote some of the code so it could be ported from NOAA’s IBM computer to the Cray systems at NERSC, and in the process helped speed up the run time for part of the code from 24 minutes to just a few seconds. She also helped redesign the post-processing workflow for better throughput by working with the queue structures on Carver, and the project benefited significantly from a queue boost and dedicated compute nodes.

“By helping him improve the workflow of his application, we’ve gotten better throughput and faster runtimes,” He said.

Bates agreed, saying “It’s a very good system to use – the support is there and the people are very helpful.”