Benchmarking the MRNet Distributed Tool Infrastructure: Lessons Learned
| dc.contributor.author | Miller, Barton P. | en_US |
| dc.contributor.author | Roth, Philip | en_US |
| dc.contributor.author | Arnold, Dorian | en_US |
| dc.date.accessioned | 2012-03-15T17:18:07Z | |
| dc.date.available | 2012-03-15T17:18:07Z | |
| dc.date.created | 2004 | en_US |
| dc.date.issued | 2004 | |
| dc.description.abstract | MRNet is an infrastructure that provides scalable multicast and data aggregation functionality for distributed tools. While evaluating MRNet's performance and scalability, we learned several important lessons about benchmarking large-scale, distributed tools and middleware. First, automation is essential for a successful benchmarking effort, and should be leveraged whenever possible during the benchmarking process. Second, microbenchmarking is invaluable not only for establishing the performance of low-level functionality, but also for design verification and debugging. Third, resource management systems need substantial improvements in their support for running tools and applications together. Finally, the most demanding experiments should be attempted early and often during a benchmarking effort to increase the chances of detecting problems with the tool and experimental methodology. | en_US |
| dc.format.mimetype | application/pdf | en_US |
| dc.identifier.citation | TR1503 | |
| dc.identifier.uri | http://digital.library.wisc.edu/1793/60394 | |
| dc.publisher | University of Wisconsin-Madison Department of Computer Sciences | en_US |
| dc.title | Benchmarking the MRNet Distributed Tool Infrastructure: Lessons Learned | en_US |
| dc.type | Technical Report | en_US |