Title


Computing for the DUNE Long-Baseline Neutrino Oscillation Experiment

Authors

Schellman, Heidi

Abstract


This is a talk given at Computing in High Energy Physics (CHEP) in Adelaide, South Australia, in November 2019. It is intended in part to explain the context of DUNE computing to computing specialists. The DUNE collaboration consists of more than 180 institutions from 33 countries. The experiment is now in preparation, with commissioning of the first 10 kt fiducial-volume liquid argon TPC expected over the period 2025-2028 and a long data-taking run with four modules expected from 2029 onward. An active prototyping program is already in place, with a short test-beam run using a 700 t, 15,360-channel prototype of the single-phase readout at the Neutrino Platform at CERN in late 2018, and tests of a similar-sized dual-phase detector scheduled for mid-2019. The 2018 test-beam run was a valuable live test of our computing model. The detector produced raw data at rates of up to ~2 GB/s. These data were stored at full rate on tape at CERN and Fermilab and replicated at sites in the UK and the Czech Republic. In total, 1.2 PB of raw data from beam and cosmic triggers were produced and reconstructed during the six-week test-beam run. Baseline predictions for the full DUNE detector, starting in the late 2020s, are 30-60 PB of raw data per year. In contrast to traditional HEP computational problems, DUNE's liquid argon TPC data consist of simple but very large (many GB) 2D data objects which share many characteristics with astrophysical images. This presents opportunities to use advances in machine learning and pattern recognition, and to be a frontier user of High Performance Computing facilities capable of massively parallel processing.
