September 13, 2019
IceCube Institutional Memorandum Of Understanding (MOU)
Michigan State University
Tyce DeYoung
Ph.D. Scientists (Faculty / Post Docs): 8 (5 / 3); Grad Students: 9
Scope of Work
Maintenance and Operations
WBS L2 areas: 2.1 Program Coordination; 2.2 Detector Maintenance & Operations; 2.3 Computing & Data Management; 2.4 Data Processing & Simulation; 2.5 Software; 2.6 Calibration. Entries are FTEs; blank cells indicate no allocation.

| Labor Cat. | Name | WBS L3 / Task | Funds Source | 2.1 | 2.2 | 2.3 | 2.4 | 2.5 | 2.6 | Grand Total |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| KE | DeYOUNG, TYCE | | | 0.10 | | | | | | 0.10 |
| KE | GRANT, DARREN | | | 0.75 | | | | | | 0.75 |
| KE | KOPPER, CLAUDIO | | | 0.10 | 0.25 | | 0.15 | 0.15 | | 0.65 |
| KE | TOLLEFSON, KIRSTEN | | | | | | | | | 0.00 |
| KE | MAHN, KENDALL | | | | | | | 0.05 | | 0.05 |
| PO | NISA, MEHR | | | | 0.03 | | | | 0.10 | 0.13 |
| PO | CLARK, BRIAN | | | | | 0.10 | | | | 0.10 |
| PO | HALLIDAY, ROBERT | | | | | 0.25 | | | | 0.25 |
| GR | NEER, GARRETT | | | | | | | | 0.10 | 0.10 |
| GR | NOWICKI, SARAH | | | | | | | | | 0.00 |
| GR | RYSEWYCK CANTU, DEVYN | | | 0.05 | | | | | | 0.05 |
| GR | SANCHEZ HERRERA, SEBASTIAN | | | | | | | | | 0.00 |
| GR | MICALLEF, JESSIE | Education & Outreach: IceCube Masterclass | | 0.05 | | | | | | |
| | | Core Software: Software strike team / CLSim development and maintenance | | | | | | 0.20 | | |
| | MICALLEF, JESSIE Total | | | 0.05 | | | | 0.20 | | 0.25 |
| GR | PEISKER, ALISON | | | | | | | | | |
| GR | HARNISCH, ALEXANDER | | | | | | | | | |
| GR | TWAGIRAYEZU, JEAN PIERRE | | | | | | | | | |
| GR | LE, HIEU | Detector calibration: In-situ DOM sensitivity / angular response calibration from muon neutrinos | Inst. In-Kind | | | | | | 0.20 | 0.20 |
| | MSU Total | | | 1.05 | 0.28 | 0.35 | 0.15 | 0.40 | 0.40 | 2.63 |
IceCube Upgrade
WBS L2 areas: 1.1 Project Management; 1.2 Drilling; 1.3 Sensors; 1.4 Comms, Power, Timing; 1.5 Calibration; 1.6 Data Systems. Entries are FTEs; blank cells indicate no allocation.

| Labor Cat. | Name | Task | Funds Source | 1.1 | 1.2 | 1.3 | 1.4 | 1.5 | 1.6 | Grand Total |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| KE | DeYOUNG, TYCE | Management | NSF / Inst. In-Kind | | | | 0.50 | | | 0.50 |
| KE | GRANT, DARREN | MSU mDOM production facility | Inst. In-Kind | | | 0.25 | | | | 0.25 |
| EN | NG, CHRISTOPHER | Penetrator cable assemblies, main cable mechanical / string hardware | NSF | | | | 0.75 | | | |
| | | Design, set up and maintain NTS | NSF | | | | 0.25 | | | |
| | NG, CHRISTOPHER Total | | | | | | 1.00 | | | 1.00 |
| EN | FERGUSON, BRIAN | Main cable electrical, procurement support | NSF | | | | 0.25 | | | 0.25 |
| PO | HALLIDAY, ROBERT | Engineering support | Inst. In-Kind | | | | 0.25 | | | 0.25 |
| | MSU Total | | | | | 0.25 | 2.00 | | | 2.25 |
Faculty:
Tyce DeYoung (IL) – ExecCom, outreach, Upgrade L2 for CPT, 100% IceCube
Darren Grant – Spokesperson, outreach, Upgrade DOM production, 100% IceCube
Claudio Kopper – Diffuse WG co-convener, software development, offline processing support, outreach, 100% IceCube
Kirsten Tollefson – 50% IceCube (50% HAWC)
Kendall Mahn – low energy systematics/GENIE, 5% IceCube (95% DUNE, T2K)
Scientists and Engineers:
Chris Ng – engineering support for Upgrade WBS 1.3, 1.4, 100% IceCube
Brian Ferguson – engineering support for Upgrade WBS 1.4, 25% IceCube
Post Docs:
Mehr Nisa – DOM response/simulation calibration using neutrino-induced muons, monitoring shifts, 50% IceCube (50% HAWC)
Thesis/Analysis topics: galaxy cluster search, Galactic sources search
Robert Halliday – IceProd and distributed computing support, Upgrade cable design/engineering support, NTS timing support, 100% IceCube
Thesis/Analysis topics: multi-flavor source searches
Brian Clark (funded by NSF AAPF) – IceProd and distributed computing support, 75% IceCube (25% ARA)
Thesis/Analysis topics: diffuse spectrum measurement, EHE neutrino searches
Ph.D. Students:
Garrett Neer – DOM response/simulation calibration using neutrino-induced muons, 100% IceCube
Thesis/Analysis topics: dark matter search using LE contained events
Sarah Nowicki – 100% IceCube
Thesis/Analysis topics: DirectReco event reconstruction, atmospheric muon neutrino measurements
Devyn Rysewyk Cantu
Thesis/Analysis topics: Extended Galactic source search, IceACT R&D
Sebastian Sanchez Herrera
Thesis/Analysis topics: Anti-neutrino tagging at low energy using Michel electrons
Jessie Micallef (funded by NSF Grad Fellowship) – Core Software: Software strike team / CLSim development and maintenance, IceCube Masterclass
Thesis/Analysis topics: Next-generation oscillation analysis
Alison Peisker (funded by NSF Grad Fellowship) – 30% IceCube (70% HAWC)
Thesis/Analysis topics: HAWC/IceCube transient searches
Alexander Harnisch – 100% IceCube
Thesis/Analysis topics: TBD
Jean Pierre Twagirayezu – 100% IceCube
Thesis/Analysis topics: TBD
Hieu Le – DOM response/simulation calibration using neutrino-induced muons, 100% IceCube
Thesis/Analysis topics: TBD
Computing Resources:
| 2019 | CPU Cores | GPU Cards |
| --- | --- | --- |
| IceCube | 750 | 100 |
The Michigan State IceCube group provides the collaboration with access to several large computing clusters maintained and administered by the High Performance Computing Center (HPCC) in the Michigan State University Institute for Cyber-Enabled Research. The HPCC includes more than 21,716 computing cores and 478 GPUs (a mix of Tesla V100, K80, and K20 cards). The IceCube group has purchased 750 cores and 8 GPUs in the cluster.
The cluster scheduling policy allows opportunistic use of all compute resources by campus users for jobs shorter than 4 hours. Jobs making use of specialized resources, such as GPUs, have priority over all other jobs queued for those nodes. When suitable jobs are available from the IceProd servers, average (peak) resource allocations of 1,000 (3,000) CPU cores and 150 (300) GPUs have been obtained.
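In practice, a job fitting the opportunistic window is an ordinary batch job with its requested walltime capped just below four hours and a GPU requested where needed. The sketch below illustrates one way such a submission could be wrapped, assuming the HPCC batch system is SLURM; the payload script name, job name, and resource requests are illustrative placeholders, not part of the actual IceProd configuration.

```python
# Minimal sketch of an opportunistic HPCC submission wrapper, assuming a
# SLURM batch system. The payload script and resource requests below are
# hypothetical placeholders, not actual IceProd code.
import subprocess
import textwrap

MAX_WALLTIME = "03:59:00"  # stay inside the <4 hour opportunistic window

def submit_gpu_job(payload: str, job_name: str = "icecube-opportunistic") -> str:
    """Submit a single-GPU batch job; returns sbatch's confirmation line."""
    script = textwrap.dedent(f"""\
        #!/bin/bash
        #SBATCH --job-name={job_name}
        #SBATCH --time={MAX_WALLTIME}
        #SBATCH --gres=gpu:1
        #SBATCH --cpus-per-task=1
        #SBATCH --mem=4G
        srun {payload}
        """)
    # With no script file argument, sbatch reads the job script from stdin.
    result = subprocess.run(["sbatch"], input=script, text=True,
                            capture_output=True, check=True)
    return result.stdout.strip()  # e.g. "Submitted batch job 1234567"

if __name__ == "__main__":
    print(submit_gpu_job("./run_simulation_segment.sh"))
```

Keeping the walltime request below the four-hour cutoff is what keeps such jobs eligible for every idle node on campus, which is how the peak allocations quoted above are reached.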
A new compute environment was rolled out on the HPCC in 2018, and we have not yet finished adapting our local scripts to it. The pledge represents a conservative estimate of the resources that will be available to IceProd jobs of less than 4 hours' duration once that adaptation is complete.