Canadian Astronomical Computing, Data and Network Facilities: A White Paper for the 2010 Long Range Plan

Posted by Jonathan Dursi on May 01, 2010 · 2 mins read

This is a crosspost from Jonathan Dursi, R&D computing at scale. See the original post here.

In this white paper for the CASCA 2010 Long Range Plan, the rest of the Computing, Data, and Network committee of CASCA and I lay out the state of the ecosystem for computation in support of Canadian astronomy, and suggest a path forward for the period of the 2010-2020 long range plan.

Abstract

Significant investment in new large, expensive astronomical observing facilities spanning a substantial portion of the electromagnetic spectrum was a dominant theme of LRP2000 and continues to be necessary for Canadian astronomy to maintain its world position. These developments are generating increasingly large volumes of data. Such investments only make sense if they are balanced by strong infrastructure support to ensure that data acquired with these facilities can be readily accessed and analyzed by observers, and that theoreticians have the tools available to simulate and understand the context of those observations. This will require continuing investment in computational facilities to store and analyze the data, networks to ensure useful access to the data and data products by Canadian researchers, and personnel to help Canadian researchers make use of these tools.

In addition, large parallel simulations have become an essential tool for astrophysical theory, and Canadian astronomy has world-leading simulators and developers who rely on world-class high-performance computing facilities maintained in Canada to do their research effectively.

We recommend: that Compute Canada be funded at $72M/yr to bring per-capita HPC funding in line with G8 norms; that each Compute Canada technology renewal include a Top-20-class computing facility; that NSERC and other funding agencies begin supporting software development as an integral component of scientific research; that staff funding for the consortia be tripled, including local access to technical analyst staff; and that last-mile campus networking bottlenecks below 10 Gb/s be addressed where they are impacting researchers, with particular urgency for the current 1 Gb/s connection at the CADC.
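To give a rough sense of why the last-mile bandwidth matters, here is a minimal back-of-envelope sketch of transfer times at 1 Gb/s versus 10 Gb/s. The 10 TB dataset size and the 80% link-efficiency factor are illustrative assumptions for this post, not figures from the white paper:

```python
def transfer_time_hours(size_tb: float, link_gbps: float, efficiency: float = 0.8) -> float:
    """Hours to move size_tb terabytes over a link_gbps link,
    assuming only `efficiency` of nominal bandwidth is achieved in practice."""
    bits = size_tb * 8e12                          # terabytes -> bits
    seconds = bits / (link_gbps * 1e9 * efficiency)
    return seconds / 3600

# Hypothetical 10 TB observational data product (illustrative size only):
for gbps in (1, 10):
    print(f"{gbps:>2} Gb/s: {transfer_time_hours(10, gbps):5.1f} h")
```

Under these assumptions, the same 10 TB product takes roughly 28 hours to move at 1 Gb/s but under 3 hours at 10 Gb/s, which is the difference between an overnight transfer and one that blocks a researcher for days as data volumes grow.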