The IceCube Collaboration: Contributions to the 31st International Cosmic Ray Conference*
Łódź, Poland, 7-15 July 2009
------------------
* Includes some related papers submitted by individual members of the Collaboration
IceCube COLLABORATION
R. Abbasi^24, Y. Abdou^18, M. Ackermann^36, J. Adams^13, J. Aguilar^24, M. Ahlers^28, K. Andeen^24,
J. Auffenberg^35, X. Bai^27, M. Baker^24, S. W. Barwick^20, R. Bay^7, J. L. Bazo Alba^36, K. Beattie^8,
J. J. Beatty^15,16, S. Bechet^10, J. K. Becker^17, K.-H. Becker^35, M. L. Benabderrahmane^36,
J. Berdermann^36, P. Berghaus^24, D. Berley^14, E. Bernardini^36, D. Bertrand^10, D. Z. Besson^22,
M. Bissok^1, E. Blaufuss^14, D. J. Boersma^24, C. Bohm^30, J. Bolmont^36, S. Böser^36, O. Botner^33,
L. Bradley^32, J. Braun^24, D. Breder^35, T. Castermans^26, D. Chirkin^24, B. Christy^14, J. Clem^27,
S. Cohen^21, D. F. Cowen^32,31, M. V. D'Agostino^7, M. Danninger^30, C. T. Day^8, C. De Clercq^11,
L. Demirörs^21, O. Depaepe^11, F. Descamps^18, P. Desiati^24, G. de Vries-Uiterweerd^18, T. DeYoung^32,
J. C. Diaz-Velez^24, J. Dreyer^17, J. P. Dumm^24, M. R. Duvoort^34, W. R. Edwards^8, R. Ehrlich^14,
J. Eisch^24, R. W. Ellsworth^14, O. Engdegård^33, S. Euler^1, P. A. Evenson^27, O. Fadiran^4,
A. R. Fazely^6, T. Feusels^18, K. Filimonov^7, C. Finley^24, M. M. Foerster^32, B. D. Fox^32,
A. Franckowiak^9, R. Franke^36, T. K. Gaisser^27, J. Gallagher^23, R. Ganugapati^24, L. Gerhardt^8,7,
L. Gladstone^24, A. Goldschmidt^8, J. A. Goodman^14, R. Gozzini^25, D. Grant^32, T. Griesel^25,
A. Groß^13,19, S. Grullon^24, R. M. Gunasingha^6, M. Gurtner^35, C. Ha^32, A. Hallgren^33,
F. Halzen^24, K. Han^13, K. Hanson^24, Y. Hasegawa^12, J. Heise^34, K. Helbing^35, P. Herquet^26,
S. Hickford^13, G. C. Hill^24, K. D. Hoffman^14, K. Hoshina^24, D. Hubert^11, W. Huelsnitz^14,
J.-P. Hülß^1, P. O. Hulth^30, K. Hultqvist^30, S. Hussain^27, R. L. Imlay^6, M. Inaba^12, A. Ishihara^12,
J. Jacobsen^24, G. S. Japaridze^4, H. Johansson^30, J. M. Joseph^8, K.-H. Kampert^35, A. Kappes^24,a,
T. Karg^35, A. Karle^24, J. L. Kelley^24, P. Kenny^22, J. Kiryluk^8,7, F. Kislat^36, S. R. Klein^8,7,
S. Klepser^36, S. Knops^1, G. Kohnen^26, H. Kolanoski^9, L. Köpke^25, M. Kowalski^9, T. Kowarik^25,
M. Krasberg^24, K. Kuehn^15, T. Kuwabara^27, M. Labare^10, S. Lafebre^32, K. Laihem^1,
H. Landsman^24, R. Lauer^36, H. Leich^36, D. Lennarz^1, A. Lucke^9, J. Lundberg^33, J. Lünemann^25,
J. Madsen^29, P. Majumdar^36, R. Maruyama^24, K. Mase^12, H. S. Matis^8, C. P. McParland^8,
K. Meagher^14, M. Merck^24, P. Mészáros^31,32, E. Middell^36, N. Milke^17, H. Miyamoto^12, A. Mohr^9,
T. Montaruli^24,b, R. Morse^24, S. M. Movit^31, K. Münich^17, R. Nahnhauer^36, J. W. Nam^20,
P. Nießen^27, D. R. Nygren^8,30, S. Odrowski^19, A. Olivas^14, M. Olivo^33, M. Ono^12, S. Panknin^9,
S. Patton^8, C. Pérez de los Heros^33, J. Petrovic^10, A. Piegsa^25, D. Pieloth^36, A. C. Pohl^33,c,
R. Porrata^7, N. Potthoff^35, P. B. Price^7, M. Prikockis^32, G. T. Przybylski^8, K. Rawlins^3, P. Redl^14,
E. Resconi^19, W. Rhode^17, M. Ribordy^21, A. Rizzo^11, J. P. Rodrigues^24, P. Roth^14, F. Rothmaier^25,
C. Rott^15, C. Roucelle^19, D. Rutledge^32, D. Ryckbosch^18, H.-G. Sander^25, S. Sarkar^28,
K. Satalecka^36, S. Schlenstedt^36, T. Schmidt^14, D. Schneider^24, A. Schukraft^1, O. Schulz^19,
M. Schunck^1, D. Seckel^27, B. Semburg^35, S. H. Seo^30, Y. Sestayo^19, S. Seunarine^13, A. Silvestri^20,
A. Slipak^32, G. M. Spiczak^29, C. Spiering^36, M. Stamatikos^15, T. Stanev^27, G. Stephens^32,
T. Stezelberger^8, R. G. Stokstad^8, M. C. Stoufer^8, S. Stoyanov^27, E. A. Strahler^24,
T. Straszheim^14, K.-H. Sulanke^36, G. W. Sullivan^14, Q. Swillens^10, I. Taboada^5, O. Tarasova^36,
A. Tepe^35, S. Ter-Antonyan^6, C. Terranova^21, S. Tilav^27, M. Tluczykont^36, P. A. Toale^32, D. Tosi^36,
D. Turčan^14, N. van Eijndhoven^34, J. Vandenbroucke^7, A. Van Overloop^18, B. Voigt^36,
C. Walck^30, T. Waldenmaier^9, M. Walter^36, C. Wendt^24, S. Westerhoff^24, N. Whitehorn^24,
C. H. Wiebusch^1, A. Wiedemann^17, G. Wikström^30, D. R. Williams^2, R. Wischnewski^36,
H. Wissing^1,14, K. Woschnagg^7, X. W. Xu^6, G. Yodh^20, S. Yoshida^12
1 III Physikalisches Institut, RWTH Aachen University, D-52056 Aachen, Germany
2 Dept. of Physics and Astronomy, University of Alabama, Tuscaloosa, AL 35487, USA
3 Dept. of Physics and Astronomy, University of Alaska Anchorage, 3211 Providence Dr., Anchorage, AK 99508, USA
4 CTSPS, Clark-Atlanta University, Atlanta, GA 30314, USA
5 School of Physics and Center for Relativistic Astrophysics, Georgia Institute of Technology, Atlanta, GA 30332, USA
6 Dept. of Physics, Southern University, Baton Rouge, LA 70813, USA
7 Dept. of Physics, University of California, Berkeley, CA 94720, USA
8 Lawrence Berkeley National Laboratory, Berkeley, CA 94720, USA
9 Institut für Physik, Humboldt-Universität zu Berlin, D-12489 Berlin, Germany
10 Université Libre de Bruxelles, Science Faculty CP230, B-1050 Brussels, Belgium
11 Vrije Universiteit Brussel, Dienst ELEM, B-1050 Brussels, Belgium
12 Dept. of Physics, Chiba University, Chiba 263-8522, Japan
13 Dept. of Physics and Astronomy, University of Canterbury, Private Bag 4800, Christchurch, New Zealand
14 Dept. of Physics, University of Maryland, College Park, MD 20742, USA
15 Dept. of Physics and Center for Cosmology and Astro-Particle Physics, Ohio State University, Columbus, OH 43210, USA
16 Dept. of Astronomy, Ohio State University, Columbus, OH 43210, USA
17 Dept. of Physics, TU Dortmund University, D-44221 Dortmund, Germany
18 Dept. of Subatomic and Radiation Physics, University of Gent, B-9000 Gent, Belgium
19 Max-Planck-Institut für Kernphysik, D-69177 Heidelberg, Germany
20 Dept. of Physics and Astronomy, University of California, Irvine, CA 92697, USA
21 Laboratory for High Energy Physics, École Polytechnique Fédérale, CH-1015 Lausanne, Switzerland
22 Dept. of Physics and Astronomy, University of Kansas, Lawrence, KS 66045, USA
23 Dept. of Astronomy, University of Wisconsin, Madison, WI 53706, USA
24 Dept. of Physics, University of Wisconsin, Madison, WI 53706, USA
25 Institute of Physics, University of Mainz, Staudinger Weg 7, D-55099 Mainz, Germany
26 University of Mons-Hainaut, 7000 Mons, Belgium
27 Bartol Research Institute and Department of Physics and Astronomy, University of Delaware, Newark, DE 19716, USA
28 Dept. of Physics, University of Oxford, 1 Keble Road, Oxford OX1 3NP, UK
29 Dept. of Physics, University of Wisconsin, River Falls, WI 54022, USA
30 Oskar Klein Centre and Dept. of Physics, Stockholm University, SE-10691 Stockholm, Sweden
31 Dept. of Astronomy and Astrophysics, Pennsylvania State University, University Park, PA 16802, USA
32 Dept. of Physics, Pennsylvania State University, University Park, PA 16802, USA
33 Dept. of Physics and Astronomy, Uppsala University, Box 516, S-75120 Uppsala, Sweden
34 Dept. of Physics and Astronomy, Utrecht University/SRON, NL-3584 CC Utrecht, The Netherlands
35 Dept. of Physics, University of Wuppertal, D-42119 Wuppertal, Germany
36 DESY, D-15735 Zeuthen, Germany
a affiliated with Universität Erlangen-Nürnberg, Physikalisches Institut, D-91058 Erlangen, Germany
b on leave of absence from Università di Bari and Sezione INFN, Dipartimento di Fisica, I-70126 Bari, Italy
c affiliated with School of Pure and Applied Natural Sciences, Kalmar University, S-39182 Kalmar, Sweden
Acknowledgments
We acknowledge the support from the following agencies: U.S. National Science Foundation-Office of Polar Programs, U.S. National Science Foundation-Physics Division, University of Wisconsin Alumni Research Foundation, U.S. Department of Energy, and National Energy Research Scientific Computing Center, the Louisiana Optical Network Initiative (LONI) grid computing resources; Swedish Research Council, Swedish Polar Research Secretariat, and Knut and Alice Wallenberg Foundation, Sweden; German Ministry for Education and Research (BMBF), Deutsche Forschungsgemeinschaft (DFG), Germany; Fund for Scientific Research (FNRS-FWO), Flanders Institute to encourage scientific and technological research in industry (IWT), Belgian Federal Science Policy Office (Belspo); the Netherlands Organisation for Scientific Research (NWO); M. Ribordy acknowledges the support of the SNF (Switzerland); A. Kappes and A. Groß acknowledge support by the EU Marie Curie OIF Program; J. P. Rodrigues acknowledges support by the Capes Foundation, Ministry of Education of Brazil.
1339: IceCube
Albrecht Karle, for the IceCube Collaboration (Highlight paper)
0653: All-Sky Point-Source Search with 40 Strings of IceCube
Jon Dumm, Juan A. Aguilar, Mike Baker, Chad Finley, Teresa Montaruli, for the
IceCube Collaboration
0812: IceCube Time-Dependent Point Source Analysis Using Multiwavelength
Information
M. Baker, J. A. Aguilar, J. Braun, J. Dumm, C. Finley, T. Montaruli, S. Odrowski, E.
Resconi for the IceCube Collaboration
0960: Search for neutrino flares from point sources with IceCube
(0908.4209)
J. L. Bazo Alba, E. Bernardini, R. Lauer, for the IceCube Collaboration
0987: Neutrino triggered high-energy gamma-ray follow-up with IceCube
Robert Franke, Elisa Bernardini for the IceCube collaboration
1173: Moon Shadow Observation by IceCube
D.J. Boersma, L. Gladstone and A. Karle for the IceCube Collaboration
1289: IceCube/AMANDA combined analyses for the search of neutrino sources at
low energies
Cécile Roucelle, Andreas Gross, Sirin Odrowski, Elisa Resconi, Yolanda Sestayo
1127: AMANDA 7-Year Multipole Analysis
(0906.3942)
Anne Schukraft, Jan-Patrick Hülß for the IceCube Collaboration
1418: Measurement of the atmospheric neutrino energy spectrum with IceCube
Dmitry Chirkin for the IceCube collaboration
0785: Atmospheric Neutrino Oscillation Measurements with IceCube
Carsten Rott for the IceCube Collaboration
1565: Direct Measurement of the Atmospheric Muon Energy Spectrum with
IceCube
(0909.0679)
Patrick Berghaus for the IceCube Collaboration
1400: Search for Diffuse High Energy Neutrinos with IceCube
Kotoyo Hoshina for the IceCube collaboration
1311: A Search For Atmospheric Neutrino-Induced Cascades with IceCube
(0910.0215) Michelangelo D'Agostino for the IceCube Collaboration
0882: First search for extraterrestrial neutrino-induced cascades with IceCube
(0909.0989) Joanna Kiryluk for the IceCube Collaboration
0708: Improved Reconstruction of Cascade-like Events in IceCube
Eike Middell, Joseph McCartin and Michelangelo D’Agostino for the IceCube
Collaboration
1221: Searches for neutrinos from GRBs with the IceCube 22-string detector and
sensitivity estimates for the full detector
A. Kappes, P. Roth, E. Strahler, for the IceCube Collaboration
0515: Search for neutrinos from GRBs with IceCube
K. Meagher, P. Roth, I. Taboada, K. Hoffman, for the IceCube Collaboration
0393: Search for GRB neutrinos via a (stacked) time profile analysis
Martijn Duvoort and Nick van Eijndhoven for the IceCube collaboration
0764: Optical follow-up of high-energy neutrinos detected by IceCube
(0909.0631)
Anna Franckowiak, Carl Akerlof, D. F. Cowen, Marek Kowalski, Ringo Lehmann,
Torsten Schmidt and Fang Yuan for the IceCube Collaboration and for the ROTSE
Collaboration
0505: Results and Prospects of Indirect Searches for Dark Matter with IceCube
Carsten Rott and Gustav Wikström for the IceCube collaboration
1356: Search for the Kaluza-Klein Dark Matter with the AMANDA/IceCube
Detectors
(0906.3969), Matthias Danninger & Kahae Han for the IceCube Collaboration
0834: Searches for WIMP Dark Matter from the Sun with AMANDA
(0906.1615)
James Braun and Daan Hubert for the IceCube Collaboration
0861: The extremely high energy neutrino search with IceCube
Keiichi Mase, Aya Ishihara and Shigeru Yoshida for the IceCube Collaboration
0913: Study of very bright cosmic-ray induced muon bundle signatures measured
by the IceCube detector
Aya Ishihara for the IceCube Collaboration
1198: Search for High Energetic Neutrinos from Supernova Explosions with
AMANDA
(0907.4621)
Dirk Lennarz and Christopher Wiebusch for the IceCube Collaboration
0549: Search for Ultra High Energy Neutrinos with AMANDA
Andrea Silvestri for the IceCube Collaboration
1372: Selection of High Energy Tau Neutrinos in IceCube
Seon-Hee Seo and P. A. Toale for the IceCube Collaboration
0484: Search for quantum gravity with IceCube and high energy atmospheric
neutrinos,
Warren Huelsnitz & John Kelley for the IceCube Collaboration
0970: A First All-Particle Cosmic Ray Energy Spectrum From IceTop
Fabian Kislat, Stefan Klepser, Hermann Kolanoski and Tilo Waldenmaier for the
IceCube Collaboration
0518: Reconstruction of IceCube coincident events and study of composition-
sensitive observables using both the surface and deep detector
Tom Feusels, Jonathan Eisch and Chen Xu for the IceCube Collaboration
0737: Small air showers in IceTop
Bakhtiyar Ruzybayev, Shahid Hussain, Chen Xu and Thomas Gaisser for the IceCube
Collaboration
1429: Cosmic Ray Composition using SPASE-2 and AMANDA-II
K. Andeen and K. Rawlins for the IceCube Collaboration
0519: Study of High pT Muons in IceCube
(0909.0055)
Lisa Gerhardt and Spencer Klein for the IceCube Collaboration
1340: Large Scale Cosmic Rays Anisotropy With IceCube
(0907.0498)
Rasha U Abbasi, Paolo Desiati and Juan Carlos Velez for the IceCube Collaboration
1398: Atmospheric Variations as observed by IceCube
Serap Tilav, Paolo Desiati, Takao Kuwabara, Dominick Rocco,
Florian Rothmaier, Matt Simmons, Henrike Wissing for the IceCube Collaboration
1251: Supernova Search with the AMANDA / IceCube Detectors
(0908.0441)
Thomas Kowarik, Timo Griesel, Alexander Piégsa for the IceCube Collaboration
1352: Physics Capabilities of the IceCube DeepCore Detector
(0907.2263)
Christopher Wiebusch for the IceCube Collaboration
1336: Fundamental Neutrino Measurements with IceCube DeepCore
Darren Grant, D. Jason Koskinen, and Carsten Rott for the IceCube collaboration
1237: Implementation of an active veto against atmospheric muons in IceCube
DeepCore
Olaf Schulz, Sebastian Euler and Darren Grant for the IceCube Collaboration
1293: Acoustic detection of high energy neutrinos in ice: Status and
results from the South Pole Acoustic Test Setup
(0908.3251 – revised)
Freija Descamps for the IceCube Collaboration
0903: Sensor development and calibration for acoustic neutrino detection in ice
(0907.3561)
Timo Karg, Martin Bissok, Karim Laihem, Benjamin Semburg, and Delia Tosi
for the IceCube collaboration
PAPERS RELATED TO ICECUBE
0466: A new method for identifying neutrino events in IceCube data
Dmitry Chirkin
0395: Muon Production of Hadronic Particle Showers in Ice and Water
Sebastian Panknin, Julien Bolmont, Marek Kowalski and Stephan Zimmer
0642: Muon bundle energy loss in deep underground detector
Xinhua Bai, Dmitry Chirkin, Thomas Gaisser, Todor Stanev and David Seckel
0542: Constraints on Neutrino Interactions at energies beyond 100 PeV with
Neutrino Telescopes
Shigeru Yoshida
0006: Constraints on Extragalactic Point Source Flux from Diffuse Neutrino Limits
Andrea Silvestri and Steven W. Barwick
0418: Study of electromagnetic backgrounds in the 25-300 MHz frequency band at
the South Pole
Jan Auffenberg, Dave Besson, Tom Gaisser, Klaus Helbing, Timo Karg, Albrecht Karle, and Ilya Kravchenko
IceCube
Albrecht Karle*, for the IceCube Collaboration
* University of Wisconsin-Madison, 1150 University Avenue, Madison, WI 53706
Abstract. IceCube is a 1 km^3 neutrino telescope currently under construction at the South Pole. The detector will consist of 5160 optical sensors deployed at depths between 1450 m and 2450 m in clear Antarctic ice, evenly distributed over 86 strings. An air shower array covering a surface area of 1 km^2 above the in-ice detector will measure cosmic ray air showers in the energy range from 300 TeV to above 1 EeV. The detector is designed to detect neutrinos of all flavors: ν_e, ν_μ, and ν_τ. With 59 strings currently in operation, construction is 67% complete. Based on data taken to date, the observatory meets its design goals. Selected results will be presented.
Keywords: neutrinos, cosmic rays, neutrino astronomy.
I. INTRODUCTION
IceCube is a large kilometer-scale neutrino telescope currently under construction at the South Pole. With the ability to detect neutrinos of all flavors over a wide energy range from about 100 GeV to beyond 10^9 GeV, IceCube is able to address fundamental questions in both high energy astrophysics and neutrino physics. One of its main goals is the search for sources of high energy astrophysical neutrinos, which provide important clues for understanding the origin of high energy cosmic rays.
The interactions of ultra high energy cosmic rays with radiation fields or matter, either at the source or in intergalactic space, result in a neutrino flux due to the decays of the produced secondary particles such as pions, kaons and muons. The observed cosmic ray flux sets the scale for the neutrino flux and leads to the prediction of event rates requiring kilometer-scale detectors; see for example [1]. As primary candidates for cosmic ray accelerators, AGNs and GRBs are thus also the most promising astrophysical point source candidates of high energy neutrinos. Galactic source candidates include supernova remnants, microquasars, and pulsars. Guaranteed sources of neutrinos are the cosmogenic high energy neutrino flux from interactions of cosmic rays with the cosmic microwave background and the galactic neutrino flux resulting from galactic cosmic rays interacting with the interstellar medium. Both fluxes are small, and their measurement constitutes a great challenge. Other sources of neutrino radiation include dark matter, in the form of supersymmetric or more exotic particles, and remnants from various phase transitions in the early universe.
The relation between the cosmic ray flux and the atmospheric neutrino flux is well understood and is based on the standard model of particle physics. The observed diffuse neutrino flux in underground laboratories agrees with Monte Carlo simulations of the primary cosmic ray flux interacting with the Earth's atmosphere and producing a secondary atmospheric neutrino flux [2].
Although atmospheric neutrinos are the primary background in searching for astrophysical neutrinos, they are very useful for two reasons. First, atmospheric neutrino physics can be studied up to PeV energies: the measurement of more than 50,000 events per year in an energy range from 500 GeV to 500 TeV will make IceCube a unique instrument for precise comparisons of atmospheric neutrinos with model predictions. At energies beyond 100 TeV a harder neutrino spectrum may emerge, which would be a signature of an extraterrestrial flux. Second, atmospheric neutrinos give the opportunity to calibrate the detector. The absence of such a calibration beam at higher energies poses a difficult challenge for detectors targeting the cosmogenic neutrino flux.
Fig. 1. Schematic view of IceCube. Fifty-nine of 86 strings have been in operation since 2009.
II. DETECTOR AND CONSTRUCTION STATUS
IceCube is designed to detect muons and cascades over a wide energy range. The string spacing was chosen in order to reliably detect and reconstruct muons in the TeV energy range and to precisely calibrate the detector using flashing LEDs and atmospheric muons. The optical properties of the South Pole ice have been measured with various calibration devices [3] and are used for modeling the detector response to charged particles. Muon reconstruction algorithms [4] allow measuring the direction and energy of tracks from all directions.
In its final configuration, the detector will consist of 86 strings reaching a depth of 2450 m below the surface. Each string carries 60 optical sensors, equally spaced between 1450 m and 2450 m depth, with the exception of the six DeepCore strings, on which the sensors are more closely spaced between 1760 m and 2450 m. In addition, 320 sensors will be deployed in 160 IceTop tanks on the surface of the ice directly above the strings. Each sensor consists of a 25 cm photomultiplier tube (PMT) connected to a waveform-recording data acquisition circuit capable of resolving pulses with nanosecond precision and having a dynamic range of at least 250 photoelectrons per 10 ns. With the most recent construction season ending in February 2009, more than half of the IceCube array has been deployed.
The detector is constructed by drilling holes in the ice, one at a time, using a hot water drill. Drilling is immediately followed by deployment of a detector string into the water-filled hole. The drilling of a hole to a depth of 2450 m takes about 30 hours; the subsequent deployment of the string typically takes less than 10 hours. The holes typically freeze back within 1-3 weeks. The time delay between two subsequent drilling cycles and string deployments was in some cases shorter than 50 hours. By the end of February 2009, 59 strings and IceTop stations had been deployed. We refer to this configuration as IC59. Once the strings are completely frozen in, commissioning can start. Approximately 99% of the deployed DOMs have been successfully commissioned. The 40-string detector configuration (IC40) was in operation from May 2008 to the end of April 2009.
III. MUONS AND NEUTRINOS
At the depth of IceCube, the event rate from downgoing atmospheric muons is close to six orders of magnitude higher than the event rate from atmospheric neutrinos. Fig. 2 shows the observed muon rate (IC22) as a function of the zenith angle [5]. IceCube is effective in detecting downward going muons. A first measurement of the muon energy spectrum is provided in the references [6].
A good angular resolution of the experiment is the basis for the zenith angle distribution, and even more so for the search for point sources of neutrinos from galactic sources, AGNs or GRBs. Figure 3 shows the angular resolution of IceCube for several detector configurations, based on the high-quality neutrino event selections used in the point source search for IC40 [7]. The median angular resolution already achieved with IC40 is 0.7°, the design parameter for the full IceCube.
The muon flux also serves in many ways as a calibration tool. One method to verify the angular resolution and absolute pointing of the detector uses the Moon shadow of cosmic rays. The Moon reaches an elevation of about 28° above the horizon at the South Pole. Despite the small altitude of the Moon, the event rate and angular resolution of IceCube are sufficient to measure the cosmic ray shadow of the Moon by mapping the muon rate in the vicinity of the Moon. The parent air showers have an energy of typically 30 TeV, well above the energy where magnetic fields would pose a significant deviation from the direction of the primary particles. Fig. 4 shows a simple declination band with bin size optimized for this analysis. A deficit of ~900 events (~4.2σ) is observed on a background of ~28000 events in 8 months of data taking. The deficit is in agreement with expectations and confirms the assumed angular resolution and absolute pointing.
Fig. 2. Muon rate in IceCube as a function of zenith angle [5]. The data agree with the detector simulation, which includes atmospheric neutrinos, atmospheric muons, and coincident cosmic ray muons (two muons erroneously reconstructed as a single track).
Fig. 3. The angular resolution function of different IceCube configurations is shown for two neutrino energy ranges sampled from an E^-2 energy spectrum.
The full IceCube will collect of order 50,000 high-quality atmospheric neutrinos per year in the TeV energy range. A detailed understanding of the response function of the detector at analysis level is the foundation for any neutrino flux measurement. We use the concept of the neutrino effective area to describe the response function of the detector with respect to neutrino flavor, energy and zenith angle. The neutrino effective area is the equivalent area for which all neutrinos of a given neutrino flux impinging on the Earth would be observed. Absorption effects of the Earth are considered as part of the detector and folded into the effective area.
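To make the effective-area concept concrete, the sketch below shows how an expected event rate follows from folding a flux with an effective area, N = T ∫ dE dΩ A_eff(E) dΦ/dE. This is not collaboration code; the power-law flux and the effective-area parameterization are toy assumptions for illustration only, and the printed number should not be compared with the ~50,000 events/year figure quoted in the text.

```python
import numpy as np

# Toy sketch: expected event rate from an effective area folded with a flux.
# Both parameterizations below are illustrative assumptions, not IceCube values.

def toy_effective_area_m2(e_gev):
    """Toy muon-neutrino effective area rising steeply with energy (assumption)."""
    return 1e-4 * (e_gev / 1e2) ** 1.5          # m^2

def toy_atmospheric_flux(e_gev):
    """Toy atmospheric nu_mu flux dPhi/dE in GeV^-1 m^-2 s^-1 sr^-1 (assumption)."""
    return 1.0e-2 * e_gev ** -3.7

energies = np.logspace(2.7, 5.7, 300)            # ~500 GeV to ~500 TeV
integrand = toy_effective_area_m2(energies) * toy_atmospheric_flux(energies)
# trapezoidal integration over energy
rate_per_sr = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(energies))
events_per_year = rate_per_sr * 2 * np.pi * 3.15e7   # upward hemisphere, one year
print(f"toy estimate: {events_per_year:.1f} events/year (placeholder numbers)")
```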
Figure 5 provides an overview of effective areas for various analyses that are presented at this conference. First we note that the effective area increases strongly in the range from 100 GeV to about 100 TeV. This is due to the increase in the neutrino-nucleon cross-section and, in the case of muons, the workhorse of high energy neutrino astronomy, due to the additional increase of the muon range. Above about 100 TeV, the increase slows down because of radiative energy losses of muons.
The IC22 [8] and IC80 as well as IC86 (IC80 + 6 DeepCore strings) atmospheric ν_μ areas are shown for upgoing neutrinos. The shaded area (IC22) indicates the range from before to after quality cuts. The effective area of the IC40 point source analysis [7] is shown for all zenith angles. It combines the upward neutrino sky (predominantly energies < 1 PeV) with downgoing neutrinos (predominantly > 1 PeV). Also shown is the all-sky ν_μ + ν_τ area of IC80.
The ν_e effective area is shown for the current IC22 contained cascade analysis [9] as well as the IC22 extremely high energy (EHE) analysis [10]. It is interesting to see how two entirely different analysis techniques match up nicely at the energy transition of about 5 PeV.
The cascade areas are about a factor of 20 smaller than the ν_μ areas, primarily because the muon range allows the detection of neutrino interactions far outside the detector, increasing the effective detector volume by a large factor. However, the excellent energy resolution of contained cascades will benefit the background rejection of any diffuse analysis, and makes cascades a competitive detection channel in the detector, where the volume grows faster than the area with the growing number of strings.
The figure illustrates why IceCube, and other large water/ice neutrino telescopes for that matter, can do physics over such a wide energy range. Unlike typical air shower cosmic ray or gamma ray detectors, the effective area increases by about 8 orders of magnitude (10^-4 m^2 to 10^+4 m^2) over an energy range of equal change of scale (10 GeV to 10^9 GeV). The analysis at the vastly different energy scales requires very different approaches, which are presented in numerous talks in the parallel sessions [11-45].
Fig. 4. The 4.2σ deficit of events from the direction of the Moon in the IceCube 40-string detector confirms the pointing accuracy.
Fig. 5. The neutrino effective area is shown for several IceCube configurations (IC22, IC40, IC86), neutrino flavors, energy ranges and analysis levels (trigger, final analysis).
Fig. 6. The energy resolution for muons is approximately 0.3 in log(energy) over a wide energy range.
The measurement of the atmospheric neutrino flux requires a good understanding of the energy response. The energy resolution for muon neutrinos in the IC22 configuration is shown in Fig. 6 [8]. Over a wide energy range (1 - 10000 TeV) the energy resolution is ~0.3 in log(energy). This resolution is largely dominated by the fluctuations of the muon energy loss over the path length of 1 km or less.
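As a quick illustration of what a resolution of ~0.3 in log10(energy) implies, the toy smearing below (an assumption for illustration, not the IceCube reconstruction) shows that roughly 68% of reconstructed energies then fall within about a factor of two of the true energy, since 10^0.3 ≈ 2.

```python
import numpy as np

# Toy illustration of a 0.3 dex Gaussian resolution in log10(energy).
rng = np.random.default_rng(0)
log10_e_true = np.full(100_000, 5.0)                 # toy sample of 100 TeV muons
log10_e_reco = log10_e_true + rng.normal(0.0, 0.3, log10_e_true.size)
spread = 10 ** np.percentile(log10_e_reco, [16, 84]) / 1e5
print(f"68% of reconstructed energies fall within x{spread[0]:.2f} .. x{spread[1]:.2f} of truth")
```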
IV. ATMOSPHERIC NEUTRINOS AND THE SEARCH FOR ASTROPHYSICAL NEUTRINOS
We have discussed the effective areas, as well as the angular and energy resolution of the detector. Armed with these ingredients we can discuss some highlights of neutrino measurements and astrophysical neutrino searches.
Figure 7 shows a preliminary measurement obtained with the IC22 configuration. An unfolding procedure has been applied to extract this neutrino flux. Also shown is the atmospheric neutrino flux as published previously based on 7 years of AMANDA-II data. The gray shaded area indicates the range of results obtained when applying the procedure to events that occurred primarily in the top or bottom of the detector. The collaboration is devoting significant efforts to understanding and reducing systematic uncertainties as the statistics increase. The data sample consists of 4492 high-quality events with an estimated purity of well above 95%. Several atmospheric neutrino events are observed above 100 TeV, pushing the diffuse astrophysical neutrino search gradually towards the PeV energy region and higher sensitivity. A look at the neutrino effective areas in Fig. 5 shows that the full IceCube with 86 strings will detect about one order of magnitude more events: ~50,000 neutrinos/year.
The search for astrophysical neutrinos is summarized in Fig. 8. While the figure focuses on diffuse fluxes, it is clear that some of these diffuse fluxes may be detected as point sources. Some examples of astrophysical flux models that are shown include AGN blazars [46], BL Lacs [47], precursor GRB models, the Waxman-Bahcall bound [48], and cosmogenic neutrinos [49].
Fig. 7. Unfolded muon neutrino spectrum [8], averaged over zenith angle, compared to simulation and to the AMANDA result. Data are taken with the 22-string configuration.
Fig. 8. Measured atmospheric neutrino fluxes from AMANDA and IceCube are shown together with a number of models for astrophysical neutrinos and several limits by IceCube and other experiments.
The following limits are shown for AMANDA and IceCube:
• AMANDA-II, 2000-2006, atmospheric muon neutrino flux [50]
• IceCube 22-string, atmospheric neutrinos (preliminary) [8]
• AMANDA-II, 2000-2003, diffuse E^-2 muon neutrino flux limit [51]
• AMANDA-II, 2000-2002, all flavors, non-contained events, PeV to EeV, E^-2 flux limit [52]
• AMANDA-II, 2000-2004, cascades, contained events, E^-2 flux limit [53]
• IceCube-40, muon neutrinos, throughgoing events, preliminary sensitivity [29]
• IceCube-22, all flavors, throughgoing, downgoing, extremely high energies (10 PeV to EeV) [10]
Also shown are a few experimental limits from other experiments, including Lake Baikal [54] (diffuse, not contained), and at higher energies some differential limits by RICE, Auger and, at yet higher energies, ANITA.
The skymap in Fig. 9 shows the probability for a point source of high-energy neutrinos. The map was obtained from 6 months of data taken with the 40-string configuration of IceCube. This is the first result obtained with half of IceCube instrumented. The "hottest spot" in the map represents an excess of 7 events, with a pre-trial probability of 10^-4.4. After taking into account trial factors, the probability for this excess to happen anywhere in the sky map is not significant. The background consists of 6796 neutrinos in the Northern hemisphere and 10,981 down-going muons, rejected to the 10^-5 level, in the Southern hemisphere. The energy threshold for the Southern hemisphere increases with increasing elevation in order to reject the cosmic ray muon background to the ~10^-5 level. The energy of accepted downgoing muons is typically above 100 TeV.
This unbinned analysis takes the angular resolution and energy information into account on an event-by-event basis in the significance calculation. The obtained sensitivity and discovery potential are shown for all zenith angles in the figure.
V. SEARCH FOR DARK MATTER
IceCube also performs searches for neutrinos produced by the annihilation of dark matter particles gravitationally trapped at the center of the Sun and the Earth. In searching for generic weakly interacting massive dark matter particles (WIMPs) with spin-independent interactions with ordinary matter, IceCube is only competitive with direct detection experiments if the WIMP mass is sufficiently large. On the other hand, for WIMPs with mostly spin-dependent interactions, IceCube has improved on the previous best limits obtained by the Super-K experiment using the same method. It improves on the best limits from direct detection experiments by two orders of magnitude. The IceCube limit as well as a limit obtained with 7 years of AMANDA data are shown in the figure. It rules out supersymmetric WIMP models not excluded by other experiments. The installation of the six DeepCore strings, as shown in Fig. 1, will greatly enhance the sensitivity of IceCube for dark matter. The projected sensitivity in the range from 50 GeV to TeV energies is shown in Fig. 11. DeepCore is an integral part of IceCube and relies on the more closely spaced nearby strings for the detection of low energy events as well as on a highly efficient veto capability against cosmic ray muon backgrounds using the surrounding IceCube strings.
Fig. 9. The map shows the probability for a point source of high-energy neutrinos on top of the atmospheric neutrino background. The map was obtained by operating IceCube with 40 strings for half a year [7]. The "hottest spot" in the map represents an excess of 7 events. After taking into account trial factors, the probability for this excess to happen anywhere in the sky map is not significant. The background consists of 6796 neutrinos in the Northern hemisphere and 10,981 down-going muons, rejected to the 10^-5 level, in the Southern hemisphere.
Fig. 10. Upper limits on E^-2-type astrophysical muon neutrino spectra are shown for the newest result from half a year of IC40 and a number of earlier results obtained by IceCube and other experiments.
VI. COSMIC RAY MUONS AND HIGH ENERGY COSMIC RAYS
IceCube is a huge cosmic-ray muon detector and the first sizeable detector covering the Southern hemisphere. We are using samples of several billion downward-going muons to study the enigmatic large and small scale anisotropies recently identified in the cosmic ray arrival directions by Northern detectors, namely the Tibet array [55] and the Milagro array [56]. Fig. 12 shows the relative deviations of up to 0.001 from the average of the Southern muon sky observed with the 22-string array [11]. A total of 4.3 billion events with a median energy of 14 TeV were used. IceCube data show that these anisotropies persist at energies in excess of 100 TeV, ruling out the Sun as their origin. Having extended the measurement to the Southern hemisphere should help to decipher the origin of these unanticipated phenomena.
IceCube can detect events, both neutrinos and cosmic ray muons, with energies ranging from 0.1 TeV to beyond 1 EeV.
The surface detector IceTop consists of ice Cherenkov tank pairs. Each IceTop station is associated with an IceCube string. With a station spacing of 125 m, it is efficient for air showers above energies of 1 PeV. Figure 13 shows an event display of a very high-energy (~EeV) air shower event. Hits are recorded in all surface detector stations and in a large number of DOMs in the deep ice. Based on a preliminary analysis, some 2000 high-energy muons would have reached the deep detector in this event if the primary was a proton, and more if it was a nucleus. With 1 km^2 surface area, IceTop will acquire a sufficient number of events in coincidence with the in-ice detector to allow for cosmic ray measurements up to 1 EeV. The directional and calorimetric measurement of the high energy muon component with the in-ice detector and the simultaneous measurement of the electromagnetic particles at the surface with IceTop will enable the investigation of the energy spectrum and the mass composition of cosmic rays.
Events with energies above one PeV can deposit an enormous amount of light in the detector. Figure 14 shows an event that was generated by a flasher pulse produced by an array of 12 UV LEDs that are mounted on every IceCube sensor. The event produces an amount of light that is comparable with that of an electron cascade on the order of 1 PeV. Photons were recorded on strings at distances up to 600 m from the flasher. The events are somewhat brighter than previously expected because the deep ice below a depth of 2100 m is exceptionally clear. The scattering length is substantially larger than in average ice at the depth of AMANDA.
Fig. 11. The red boxes show the upper limits at 90% confidence level on the spin-dependent interaction of dark matter particles with ordinary matter [18, 20]. The two lines represent the extreme cases where the neutrinos originate mostly from heavy quarks (top line) and weak bosons (bottom line) produced in the annihilation of the dark matter particles. Also shown is the reach of the complete IceCube and its DeepCore extension after 5 years of observation of the Sun. The shaded area represents supersymmetric models not disfavored by direct searches for dark matter. Also shown are previous limits from direct experiments and from the Super-Kamiokande experiment.
Fig. 12. The plot shows the skymap of the relative intensity in the arrival directions of 4.3 billion muons produced by cosmic ray interactions with the atmosphere, with a median energy of 14 TeV; these events were reconstructed with an average angular resolution of 3 degrees. The skymap is displayed in equatorial coordinates.
Fig. 13. A very high energy cosmic ray air shower observed both with the surface detector IceTop and the in-ice detector string array.
Extremely high energy (EHE) events, above about 1 PeV, are observed near and above the horizon. At these energies the Earth becomes opaque to neutrinos, and one needs to change the search strategy. In an optimized analysis, the neutrino effective area reaches about 4000 m^2 for IC80 at 1 EeV. IC80 can therefore test optimistic models of the cosmogenic neutrino flux. IceCube is already accumulating an exposure with the current data that makes detection of a cosmogenic neutrino event possible.
IceCube construction is on schedule for completion in February 2011. The stable operation of the detector and the analysis of recent data allow a rapid increase of the sensitivity and the discovery potential of IceCube.
VII. ACKNOWLEDGEMENTS
We acknowledge the support from the following agencies: U.S. National Science Foundation-Office of Polar Programs, U.S. National Science Foundation-Physics Division, University of Wisconsin Alumni Research Foundation, U.S. Department of Energy, and National Energy Research Scientific Computing Center, the Louisiana Optical Network Initiative (LONI) grid computing resources; Swedish Research Council, Swedish Polar Research Secretariat, and Knut and Alice Wallenberg Foundation, Sweden; German Ministry for Education and Research (BMBF), Deutsche Forschungsgemeinschaft (DFG), Germany; Fund for Scientific Research (FNRS-FWO), Flanders Institute to encourage scientific and technological research in industry (IWT), Belgian Federal Science Policy Office (Belspo); the Netherlands Organisation for Scientific Research (NWO); M. Ribordy acknowledges the support of the SNF (Switzerland); A. Kappes and A. Groß acknowledge support by the EU Marie Curie OIF Program.
Figure 14: A flasher event in IceCube. Such events, produced by LEDs built into the DOMs, can be used for calibration purposes.
REFERENCES
[1] E. Waxman and J. Bahcall, Phys. Rev. D 59 (1998) 023002
[2] T. Gaisser, Cosmic Rays and Particle Physics, Cambridge University Press, 1991
[3] M. Ackermann et al., J. Geophys. Res. 111, D13203 (2006)
[4] J. Ahrens et al., Nucl. Inst. Meth. A 524, 169 (2004)
[5] P. Berghaus for the IceCube Collaboration, Proc. of the XV International Symposium on Very High Energy Cosmic Ray Interactions (ISVHECRI 2008), Paris, France; arXiv:0902.0021
[6] P. Berghaus et al., Direct Atmospheric Muon Energy Spectrum Measurement with IceCube (IceCube collaboration), Proc. of the 31st ICRC HE1.5; arXiv:0909.0679
[7] J. Dumm et al., Likelihood Point-Source Search with IceCube (IceCube collaboration), Proc. of the 31st ICRC OG2.5
[8] D. Chirkin et al., Measurement of the Atmospheric Neutrino Energy Spectrum with IceCube (IceCube collaboration), Proc. of the 31st ICRC HE2.2
[9] J. Kiryluk et al., First Search for Extraterrestrial Neutrino-Induced Cascades with IceCube (IceCube collaboration), Proc. of the 31st ICRC OG2.5; arXiv:0909.0989
[10] K. Mase et al., The Extremely High-Energy Neutrino Search with IceCube (IceCube collaboration), Proc. of the 31st ICRC HE1.4
[11] R. Abbasi, P. Desiati, J. C. Díaz Vélez et al., Large-Scale Cosmic-Ray Anisotropy with IceCube (IceCube collaboration), Proc. of the 31st ICRC SH3.2; arXiv:0907.0498
[12] K. Andeen et al., Composition in the Knee Region with SPASE-AMANDA (IceCube collaboration), Proc. of the 31st ICRC HE1.2
[13] C. Rott et al., Atmospheric Neutrino Oscillation Measurement with IceCube (IceCube collaboration), Proc. of the 31st ICRC HE2.3
[14] M. Baker et al., IceCube Time-Dependent Point-Source Analysis Using Multiwavelength Information (IceCube collaboration), Proc. of the 31st ICRC OG2.5
[15] J. Bazo Alba et al., Search for Neutrino Flares from Point Sources with IceCube (IceCube collaboration), Proc. of the 31st ICRC OG2.5
[16] M. Bissok et al., Sensor Development and Calibration for Acoustic Neutrino Detection in Ice (IceCube collaboration), Proc. of the 31st ICRC HE2.4; arXiv:0907.3561
[17] D. Boersma et al., Moon-Shadow Observation by IceCube (IceCube collaboration), Proc. of the 31st ICRC OG2.5
[18] J. Braun and D. Hubert et al., Searches for WIMP Dark Matter from the Sun with AMANDA (IceCube collaboration), Proc. of the 31st ICRC HE2.3; arXiv:0906.1615
[19] M. D'Agostino et al., Search for Atmospheric Neutrino-Induced Cascades (IceCube collaboration), Proc. of the 31st ICRC HE2.2; arXiv:0910.0215
[20] M. Danninger and K. Han et al., Search for the Kaluza-Klein Dark Matter with the AMANDA/IceCube Detectors (IceCube collaboration), Proc. of the 31st ICRC HE2.3; arXiv:0906.3969
[21] F. Descamps et al., Acoustic Detection of High-Energy Neutrinos in Ice (IceCube collaboration), Proc. of the 31st ICRC HE2.4
[22] M. Duvoort et al., Search for GRB Neutrinos via a (Stacked) Time Profile Analysis (IceCube collaboration), Proc. of the 31st ICRC OG2.4
[23] S. Euler et al., Implementation of an Active Veto against Atmospheric Muons in IceCube DeepCore (IceCube collaboration), Proc. of the 31st ICRC OG2.5
[24] T. Feusels et al., Reconstruction of IceCube Coincident Events and Study of Composition-Sensitive Observables Using Both the Surface and Deep Detector (IceCube collaboration), Proc. of the 31st ICRC HE1.3
[25] A. Franckowiak et al., Optical Follow-Up of High-Energy Neutrinos Detected by IceCube (IceCube and ROTSE collaborations), Proc. of the 31st ICRC OG2.5
[26] R. Franke et al., Neutrino Triggered High-Energy Gamma-Ray Follow-Up with IceCube (IceCube collaboration), Proc. of the 31st ICRC OG2.5
[27] L. Gerhardt et al., Study of High p_T Muons in IceCube (IceCube collaboration), Proc. of the 31st ICRC HE1.5; arXiv:0909.0055
[28] D. Grant et al., Fundamental Neutrino Measurements with IceCube DeepCore (IceCube collaboration), Proc. of the 31st ICRC HE2.2
[29] K. Hoshina et al., Search for Diffuse High-Energy Neutrinos with IceCube (IceCube collaboration), Proc. of the 31st ICRC OG2.5
[30] W. Huelsnitz et al., Search for Quantum Gravity with IceCube and High-Energy Atmospheric Neutrinos (IceCube collaboration), Proc. of the 31st ICRC HE2.3
[31] A. Ishihara et al., Energy-Scale Calibration Using Cosmic-Ray Induced Muon Bundles Measured by the IceCube Detector with IceTop Coincident Signals (IceCube collaboration), Proc. of the 31st ICRC HE1.5
[32] A. Kappes et al., Searches for Neutrinos from GRBs with the IceCube 22-String Detector and Sensitivity Estimates for the Full Detector (IceCube collaboration), Proc. of the 31st ICRC OG2.4
[33] F. Kislat et al., A First All-Particle Cosmic-Ray Energy Spectrum from IceTop (IceCube collaboration), Proc. of the 31st ICRC HE1.2
[34] T. Kowarik et al., Supernova Search with the AMANDA / IceCube Neutrino Telescopes (IceCube collaboration), Proc. of the 31st ICRC OG2.2
[35] D. Lennarz et al., Search for High-Energetic Neutrinos from Supernova Explosions with AMANDA (IceCube collaboration), Proc. of the 31st ICRC OG2.5; arXiv:0907.4621
[36] K. Meagher et al., Search for Neutrinos from GRBs with IceCube (IceCube collaboration), Proc. of the 31st ICRC OG2.4
[37] E. Middell et al., Improved Reconstruction of Cascade-Like Events (IceCube collaboration), Proc. of the 31st ICRC OG2.5
[38] C. Portello-Roucelle et al., IceCube / AMANDA Combined Analyses for the Search of Neutrino Sources at Low Energies (IceCube collaboration), Proc. of the 31st ICRC OG2.5
[39] C. Rott et al., Results and Prospects of Indirect Searches for Dark Matter with IceCube (IceCube collaboration), Proc. of the 31st ICRC HE2.3
[40] B. Ruzybayev et al., Small Air Showers in IceTop (IceCube collaboration), Proc. of the 31st ICRC HE1.1
[41] A. Schukraft and J. Hülß et al., AMANDA 7-Year Multipole Analysis (IceCube collaboration), Proc. of the 31st ICRC OG2.5; arXiv:0906.3942
[42] S. Seo et al., Search for High-Energy Tau Neutrinos in IceCube (IceCube collaboration), Proc. of the 31st ICRC OG2.5
[43] A. Silvestri et al., Search for Ultra-High-Energy Neutrinos with AMANDA (IceCube collaboration), Proc. of the 31st ICRC OG2.5
[44] S. Tilav et al., Atmospheric Variations as Observed by IceCube (IceCube collaboration), Proc. of the 31st ICRC HE1.1
[45] C. Wiebusch et al., Physics Capabilities of the IceCube DeepCore Detector (IceCube collaboration), Proc. of the 31st ICRC OG2.5; arXiv:0907.2263
[46] F. W. Stecker, Phys. Rev. D 72(10):107301, 2005
[47] A. Mücke et al., Astropart. Phys. 18:593, 2003
[48] S. Razzaque, P. Mészáros, and E. Waxman, Phys. Rev. D 68(8):083001, 2003
[49] R. Engel, D. Seckel, and T. Stanev, Phys. Rev. D 64(9):093010, 2001
[50] A. Achterberg et al. (IceCube collaboration), Phys. Rev. D 79, 102005, 2009
[51] A. Achterberg et al. (IceCube collaboration), Phys. Rev. D 76, 042008 (2007)
[52] M. Ackermann et al. (IceCube collaboration), Astrophys. J. 675, 1014-1024 (2008); arXiv:0711.3022
[53] A. Achterberg et al., Astrophys. J. 664, 397-410 (2007); astro-ph/0702265v2
[54] A. V. Avrorin et al., Astronomy Letters, Vol. 35, No. 10, pp. 651-662, 2009; arXiv:0909.5562
[55] M. Amenomori et al. [The Tibet AS-Gamma Collaboration], Astrophys. J. 633, 1005 (2005); arXiv:astro-ph/0502039
[56] A. A. Abdo et al., Phys. Rev. Lett. 101:221101 (2008); arXiv:0801.3827
All-Sky Point-Source Search with 40 Strings of IceCube
Jon Dumm*, Juan A. Aguilar*, Mike Baker*, Chad Finley*, Teresa Montaruli*, for the IceCube Collaboration†
* Dept. of Physics, University of Wisconsin, Madison, WI 53706, USA
† See the special section of these proceedings.
Abstract. During 2008-09, the IceCube Neutrino Observatory was operational with 40 strings of optical modules deployed in the ice. We describe the search for neutrino point sources based on a maximum likelihood analysis of the data collected in this configuration. This data sample provides the best sensitivity to high energy neutrino point sources to date. The field of view is extended into the down-going region, providing sensitivity over the entire sky. The 22-string result is discussed, along with improvements leading to updated angular resolution, effective area, and sensitivity. The improvement in the performance as the number of strings is increased is also shown.
Keywords: neutrino astronomy
I. INTRODUCTION
The primary goal of the IceCube Neutrino Observatory is the detection of high energy astrophysical neutrinos. Such an observation could reveal the origins of cosmic rays and offer insight into some of the most energetic phenomena in the Universe. In order to detect these neutrinos, IceCube will instrument a cubic kilometer of the clear Antarctic ice sheet underneath the geographic South Pole with an array of 5,160 Digital Optical Modules (DOMs) deployed on 86 strings from 1.5-2.5 km deep. This includes six strings with a smaller DOM spacing and higher quantum efficiency comprising DeepCore, increasing the sensitivity to low energy neutrinos below ~100 GeV. IceCube also includes a surface array (IceTop) for observing extensive air showers of cosmic rays. Construction began in the austral summer 2004-05 and is planned to finish in 2011. Each DOM consists of a 25 cm diameter Hamamatsu photomultiplier tube, electronics for waveform digitization, and a spherical, pressure-resistant glass housing. The DOMs detect Cherenkov photons induced by relativistic charged particles passing through the ice. In particular, the directions of muons (either from cosmic ray showers above the surface or from neutrino interactions within the ice or bedrock) can be well reconstructed from the track-like pattern and timing of hit DOMs.
The 22-string results presented in the discussion are from a traditional up-going search. In such a search, neutrino telescopes use the Earth as a filter for the large background of atmospheric muons, leaving only an irreducible background of atmospheric neutrinos below the horizon. These have a softer spectrum (~E^-3.6 above 100 GeV) than astrophysical neutrinos, which originate from the decays of particles accelerated by the first order Fermi mechanism and thus are expected to have an E^-2 spectrum. This search extends the field of view above the horizon into the large background of atmospheric muons. In order to reduce this background, strict cuts on the energy of events need to be applied. This makes the search above the horizon primarily sensitive to extremely high energy (> PeV) sources.
II. METHODOLOGY
An unbinned maximum likelihood analysis, accounting for individual reconstructed event uncertainties and energy estimators, is used in IceCube point source analyses. A full description can be found in Braun et al. [1]. This method improves the sensitivity to astrophysical sources over directional clustering alone by leveraging the event energies in order to separate hard spectrum signals from the softer spectrum of the atmospheric neutrino or muon background. For each tested direction in the sky, the best fit is found for the number of signal events n_s over background and the spectral index γ of a power law describing the excess events. The likelihood ratio of the best-fit hypothesis to the null hypothesis (n_s = 0) forms the test statistic. The significance of the result is evaluated by performing the analysis on scrambled data sets, randomizing the events in right ascension but keeping all other event properties fixed. Uniform exposure in right ascension is ensured as the detector rotates completely each day, and the location at 90° south latitude gives a uniform background for each declination band. Events that are nearly vertical (declination < -85° or > 85°) are left out of the analysis, since scrambling in right ascension does not work in the polar regions.
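The core of this construction can be sketched in a few lines. The snippet below is a minimal illustration of the likelihood-ratio test statistic described above, not the full analysis of Braun et al. [1]: the per-event signal and background PDF values are taken as given, and the spectral index is held fixed rather than fitted, which is a simplifying assumption.

```python
import numpy as np

def test_statistic(S, B, n_grid=None):
    """Return max over n_s of 2*log[L(n_s)/L(0)] for per-event PDF values S, B,
    with L(n_s) = prod_i [ n_s/N * S_i + (1 - n_s/N) * B_i ]."""
    N = len(S)
    if n_grid is None:
        n_grid = np.linspace(0.0, 0.2 * N, 200)
    delta = np.array([np.sum(np.log(ns / N * S / B + (1.0 - ns / N))) for ns in n_grid])
    return 2.0 * delta.max()

# Toy usage: most events are background-like (S/B ~ 1); inject a few signal-like ones.
rng = np.random.default_rng(1)
S_over_B = rng.exponential(1.0, 1000)
S_over_B[:5] = 50.0
ts = test_statistic(S_over_B, np.ones_like(S_over_B))
print(f"TS = {ts:.1f}")
# The significance would then be estimated by repeating this on data sets with the
# events' right ascensions scrambled, as described in the text.
```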
Two point-source searches are performed. The first is an all-sky search, where the maximum likelihood ratio is evaluated for each direction in the sky on a grid much finer than the angular resolution. The significance of any point on the grid is determined by the fraction of scrambled data sets containing at least one grid point with a log likelihood ratio higher than the one observed in the data. This fraction is the post-trial p-value for the all-sky search. Because the all-sky search includes a large number of effective trials, the second search is restricted to the directions of a priori selected sources of interest. The post-trial p-value for this search is again calculated by performing the same analysis on scrambled data sets.
III. EVENT SELECTION
Forty strings of IceCube were operational from April 2008 to May 2009 with ~90% duty cycle after a good run selection based on detector stability. The ~3 × 10^10 triggered events per year are first reduced to ~1 × 10^9 events using low-level likelihood reconstructions and energy estimators as part of an online filtering system on site. These filtered events are sent over satellite to a data center in the North for further processing, including higher-level likelihood reconstructions for better angular resolution. Applying the analysis-level cuts (described below) that optimize the sensitivity to point sources finally yields a sample of ~3 × 10^4 events. Due to offline filtering constraints, 144 days of livetime were used to design the analysis strategy and finalize event selection, keeping the time and right ascension of the events blinded. This represents about one-half of the final 40-string data sample. Because the northern sky and southern sky present very different challenges, two techniques are used to reduce the background due to cosmic ray muons.
For the northern sky, the Earth filters out atmospheric muons. Only neutrinos can penetrate all the way through the Earth and interact near the detector to create up-going muons. However, since down-going atmospheric muons trigger the detector at ~1 kHz, even a small fraction of mis-reconstructed events contaminates the northern sky search. Events may be mis-reconstructed due to random noise or light from muons from independent cosmic ray showers coincident in the same readout window of ±10 µs. Therefore, strict event selection is still required to reject mis-reconstructed down-going events. This selection is based on track-like quality parameters (the reduced likelihood of the track fit and the directional width of the likelihood space around the best track fit [2]), a likelihood ratio between the best up-going and down-going track solutions, and a requirement that the event's set of hits can be split into two parts which both reconstruct as nearly up-going. Although the track-like quality parameters have very little declination dependence, these last two parameters only work for selecting up-going neutrino candidates and remove down-going events. This event selection provides an optimal sensitivity to sources of neutrinos in the TeV-PeV energy range.
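For illustration only, the snippet below sketches how such a cut-based selection could be expressed. The variable names and thresholds are hypothetical stand-ins for the track-quality, likelihood-ratio, and split-fit criteria listed above; they are not actual IceCube variable names or cut values.

```python
# Hypothetical stand-ins for the up-going selection criteria described in the text.
# None of the names or thresholds below are real IceCube quantities.

def passes_upgoing_selection(event):
    """Toy up-going event selection mirroring the criteria listed in the text."""
    return (
        event["rlogl"] < 8.0                        # reduced log-likelihood of the track fit
        and event["sigma_paraboloid_deg"] < 3.0     # directional width of the likelihood space
        and event["llh_up_minus_down"] > 25.0       # up-going vs. down-going likelihood ratio
        and event["split_fit_zenith_1_deg"] > 80.0  # both halves of the split event
        and event["split_fit_zenith_2_deg"] > 80.0  # reconstruct as nearly up-going
    )

example = {
    "rlogl": 7.2,
    "sigma_paraboloid_deg": 1.1,
    "llh_up_minus_down": 40.0,
    "split_fit_zenith_1_deg": 95.0,
    "split_fit_zenith_2_deg": 102.0,
}
print(passes_upgoing_selection(example))   # True for this toy event
```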
In the southern sky, energy estimators were used to separate the large number of atmospheric muons from a hypothetical source of neutrinos with a harder spectrum. After track-quality selections, similar to but tighter than for the up-going sample, a cut based on an energy estimator is made until a fixed number of events per steradian is achieved. Because only the highest energy events pass the selection, sensitivity is primarily to neutrino sources at PeV energies and above. Unlike for the northern sky, which is a ~90% pure sample of neutrino-induced muons, the event sample in the southern sky is almost entirely well-reconstructed high energy atmospheric muons and muon bundles.
Fig. 1: Probability density (P) of neutrino energies at final cut level for atmospheric neutrinos and an E^-2 spectrum of neutrinos averaged over the northern sky, and an E^-1.5 spectrum in the southern sky.
IV. PERFORMANCE
The performance of the detector and the analysis is characterized using a simulation of ν_μ and ν̄_μ. Atmospheric muon background is simulated using CORSIKA [3]. Muon propagation through the Earth and ice is done using MMC [4]. A detailed simulation of the ice [5] propagates the Cherenkov photon signal to each DOM. Finally, a simulation of the DOM, including angular acceptance and electronics, yields an output treated identically to data. For an E^-2 spectrum of neutrinos, the median angular difference between the neutrino and the reconstructed direction of the muon in the northern (southern) sky is 0.8° (0.6°). The different energy distributions in each hemisphere shown in Fig. 1 cause this effect, since the reconstruction performs better at higher energies. The cumulative point spread functions for the 22-, 40-, and 80-string configurations of IceCube are shown in Fig. 2 for two different ranges of energy. Fig. 3 shows the effective area to an equal-ratio flux of ν_μ + ν̄_μ. Fig. 4 shows the 40-string sensitivity to an E^-2 spectrum of neutrinos for 330 days of livetime, compared to the 22-string configuration of IceCube, as well as the ANTARES sensitivity, primarily relevant for the southern sky. The 80-string result uses the same methodology and event selection for the up-going region as this work.
V. DISCUSSION
The previous season of IceCube data recorded with the 22-string configuration has already been the subject of point source searches [7]. That analysis used 5114 atmospheric neutrino candidate events, with a contamination of about 5% atmospheric muons, collected during a livetime of about 276 days. No evidence was found for a signal; the largest significance is located at 153.4° r.a., 11.4° dec. Accounting for all trial factors, this is consistent with the null hypothesis at the 2.2σ level. The events at the most significant location did not show a clear time-dependent pattern, and these coordinates have been included in the catalogue of sources for the 40-string analysis.

Fig. 2: The point spread function of the 22-, 40-, and 80-string IceCube configurations in two energy bins, shown as the cumulative distribution of the angular difference between the neutrino and the reconstructed muon track using simulated neutrinos. The large improvement between the 22- and 40-string point spread functions at high energies is due to an improved reconstruction, which now uses charge information.

Fig. 3: The IceCube 40-string solid-angle-averaged effective area to an equal-ratio flux of ν_µ and ν̄_µ, reconstructed within 2° of the true direction, shown in six zenith bands from 0° to 180°. The different shapes of each zenith band are due to a combination of the event selection and how much of the Earth the neutrinos must travel through. Since the chance of a neutrino interacting increases with its energy, in the very up-going region high energy neutrinos are absorbed in the Earth; only near the horizon do muons from >PeV neutrinos often reach IceCube. Above the horizon, low energy events are removed by cuts, and in the very down-going region effective area at high energies is lost due to insufficient target material.
Fig. 4: 40-string IceCube sensitivity for 330 days as a function of declination to a point source with differential flux dΦ/dE = Φ_0 (E/TeV)^-2, shown as E^2 dN/dE in TeV cm^-2 s^-1. Specifically, Φ_0 is the minimum source flux normalization (assuming an E^-2 spectrum) such that 90% of simulated trials result in a log likelihood ratio log λ greater than the median log likelihood ratio in background-only trials (log λ = 0). Comparisons are also shown for the 22-string configuration (275.7 days), the expected performance of the full 80-string configuration (365 days), and the ANTARES [6] sensitivity (365 days).
Since the 22-string analysis, a number of improvements have been made. First, an additional analysis of the 22-string data, optimized for E^-2 and harder spectra, was performed down to −50° declination with a binned search [8]; these analyses are now unified into one all-sky search which uses the energy of the events and extends to −85° declination. Second, a new reconstruction that uses the charge observed in each DOM performs better, especially on high energy events. Third, an improved energy estimator, based on the photon density along the muon track, has better muon energy resolution.
With construction more than half complete, IceCube is already beginning to demonstrate its potential as an extraterrestrial neutrino observatory. The latest science run, with 40 strings, was the first with one detector axis of the same length as in the final array. Horizontal muon tracks reconstructed along this axis provide the first class of events of the same quality as those in the finished 80-string detector.
There are now 59 strings of IceCube deployed and taking data. Further development of reconstruction and analysis techniques, through a better understanding of the detector and the depth-dependent properties of the ice, has continued to lead to improvements in physics results. New techniques in the southern sky may include separating muon bundles from cosmic ray showers from single muons induced by high energy neutrinos. At lower energies, the identification of starting muon tracks from neutrinos interacting inside the detector will be helped by the addition of DeepCore [9].
REFERENCES
[1] J. Braun et al., Methods for point source analysis in high energy neutrino telescopes, Astropart. Phys. 29 (2008) 299.
[2] T. Neunhöffer, Astropart. Phys. 25 (2006) 220.
[3] D. Heck et al., CORSIKA: A Monte Carlo code to simulate extensive air showers, FZKA Tech. Rep., 1998.
[4] D. Chirkin and W. Rhode, preprint hep-ph/0407075, 2004.
[5] J. Lundberg et al., Nucl. Instrum. Meth. A581 (2007) 619.
[6] J. A. Aguilar et al., Expected discovery potential and sensitivity to neutrino point-like sources of the ANTARES neutrino telescope, in Proceedings of the 30th ICRC, Merida, 2007.
[7] R. Abbasi et al. (IceCube Collaboration), First Neutrino Point-Source Results from IceCube in the 22-String Configuration, submitted, 2009, arXiv:0905.2253.
[8] R. Lauer and E. Bernardini, Extended Search for Point Sources of Neutrinos Below and Above the Horizon, in Proceedings of the 2nd Heidelberg Workshop, 2009, arXiv:0903.5434.
[9] C. Wiebusch et al. (IceCube Collaboration), these proceedings.
IceCube Time-Dependent Point Source Analysis Using Multiwavelength Information
M. Baker∗, J. A. Aguilar∗, J. Braun∗, J. Dumm∗, C. Finley∗, T. Montaruli∗, S. Odrowski†, E. Resconi† for the IceCube Collaboration‡
∗ Dept. of Physics, University of Wisconsin, Madison, WI 53706, USA
† Max-Planck-Institut für Kernphysik, D-69177 Heidelberg, Germany
‡ see special section of these proceedings
Abstract. In order to enhance IceCube's sensitivity to astrophysical objects, we have developed a dedicated search for neutrinos in coincidence with flares detected in various photon wavebands from blazars and high-energy binary systems. The analysis is based on a maximum likelihood method including the reconstructed position, the estimated energy, and the arrival time of IceCube events. After a short summary of the phenomenological arguments motivating this approach, we present results from data collected with 22 IceCube strings in 2007-2008. First results for the 40-string IceCube configuration during 2008-2009 will be presented at the conference. We also report on plans to use long light curves and extract from them a time-variable probability density function.
Keywords: Neutrino astronomy, Multiwavelength astronomy
I. INTRODUCTION
IceCube is a high-energy neutrino observatory currently under construction at the geographic South Pole. The full detector will be composed of 86 strings of 60 Digital Optical Modules (DOMs) each, deployed between 1500 and 2500 m below the glacier surface. A six-string DeepCore with higher quantum efficiency photomultipliers and closer DOM spacing in the lower detector will enhance sensitivity to low energy neutrinos. Muons passing through the detector emit Čerenkov light, allowing reconstruction with about 1° angular resolution in the full detector and about 1.5° (median) in the 22-string configuration. In this paper we describe the introduction of a time-dependent term to the standard search for steady emission of neutrinos presented in Ref. [3]. We apply it in a search for periodic emission of neutrinos from seven high-energy binary systems and for neutrino emission coincident with a catalogue of flares occurring when IceCube was taking data in its 22-string configuration. We also describe an extension of the method that uses multi-wavelength (MWL) lightcurves to characterize neutrino emission.
II. TIME DEPENDENT POINT SOURCE SEARCH
An unbinned maximum likelihood ratio method, using
a test statistic that compares a signal plus background
hypothesis to a background-only one, has been used for
the search for point sources of neutrinos in IceCube [1].
We use the angular and energy distribution of events
as information to characterize the signal with respect to
the background. In the analysis of the 22-string data
we use the number of hit DOMs in an event as an
energy estimator, while for the 40-string configuration
we use a more sophisticated energy estimator based on
the photon density along the muon track. The analysis
method returns a best-fit number of signal events and
spectral index (though with a large error that depends
on the number of events near the celestial coordinate
being tested).
We use the IceCube 22-string upward-going neutrino event data sample of 5114 events collected in 275 days of livetime between May 31, 2007 and April 5, 2008 (which includes a misreconstructed atmospheric muon contamination of about 5%). Selection cuts are based on the quality of the reconstruction, on the angular uncertainty of the track reconstruction (σ < 3°), and on other variables such as the number of DOMs hit by direct Čerenkov light produced by muons. Fig. 1 shows that the time distribution of these atmospheric neutrino events is consistent with a flat distribution.
Neutrinos from a point source are expected to cluster around the direction of the source and to have a spectrum dN/dE ∝ E^{-γ} with spectral index γ ∼ 2, as predicted by first-order Fermi acceleration mechanisms. On the other hand, the background of atmospheric neutrinos is distributed uniformly in right ascension and has an energy spectrum with γ ∼ 3.6 above 100 GeV. We construct a signal probability distribution function (pdf):

S_i = \frac{1}{2\pi\sigma_i^2} \exp\!\left(-\frac{|\vec{x}_i - \vec{x}_s|^2}{2\sigma_i^2}\right) \cdot E(E_i|\gamma) \cdot T_i ,    (1)
where σ_i is the reconstructed angular error of the event [2], |\vec{x}_i - \vec{x}_s| the angular separation between the reconstructed event and the source, E is the energy pdf with spectral index γ, and T_i is the time pdf of the event. The background pdf is given by:
B_i = B(\vec{x}_i) \cdot E_{bkg}(E_i) \cdot \frac{1}{L} ,    (2)
where B(\vec{x}_i) is the background event density (a function of the declination of the event), E_{bkg} the energy distribution of the background, and L the livetime. The background pdf is determined using the data, and the final p-values for these analyses are obtained by comparing scrambled equivalent experiments to data. Scrambled times are drawn from the distribution of measured atmospheric muon event times, taking one event per minute to obtain a constant rate.

Fig. 1. Time distribution of the 22-string neutrino events.
The analysis method gives more weight to events
which are clustered in space and at energies higher than
expected from the atmospheric background. In this work
we present the results which include for the first time
a time dependent term in the pdf. Results are given
in terms of p-values, or the fraction of the scrambled
samples with a higher test statistic than found for the
data.
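As a concrete illustration of how Eqs. (1) and (2) enter the event weighting, the following Python sketch evaluates the per-event signal and background pdfs for toy inputs. The function names, the simple power-law energy pdfs and the numerical values are illustrative assumptions, not the actual analysis code.

import numpy as np

def signal_pdf(ang_sep, sigma, E, gamma, time_pdf_value):
    """Per-event signal pdf S_i of Eq. (1): space x energy x time.

    ang_sep : angular separation |x_i - x_s| in radians
    sigma   : per-event angular error estimate in radians
    E       : reconstructed energy proxy (arbitrary units)
    gamma   : assumed source spectral index (E^-gamma)
    time_pdf_value : value of the time pdf T_i for this event
    """
    space = np.exp(-ang_sep**2 / (2.0 * sigma**2)) / (2.0 * np.pi * sigma**2)
    # Toy energy pdf: power law normalized on a fixed energy range (assumption).
    e_min, e_max = 1e2, 1e8
    norm = (e_min**(1 - gamma) - e_max**(1 - gamma)) / (gamma - 1)
    energy = E**(-gamma) / norm
    return space * energy * time_pdf_value

def background_pdf(bkg_density, E, livetime):
    """Per-event background pdf B_i of Eq. (2).

    bkg_density : declination-dependent background event density B(x_i) [sr^-1]
    E           : reconstructed energy proxy
    livetime    : total livetime L, so the time term is uniform (1/L)
    """
    gamma_atm = 3.6  # approximate atmospheric spectral index quoted in the text
    e_min, e_max = 1e2, 1e8
    norm = (e_min**(1 - gamma_atm) - e_max**(1 - gamma_atm)) / (gamma_atm - 1)
    energy = E**(-gamma_atm) / norm
    return bkg_density * energy / livetime

# Example: one event 0.5 deg from the source with a 0.7 deg error estimate.
S = signal_pdf(np.radians(0.5), np.radians(0.7), E=1e4, gamma=2.0,
               time_pdf_value=1.0 / 275.0)
B = background_pdf(bkg_density=1.0 / (4 * np.pi), E=1e4, livetime=275.0)
print("S_i/B_i =", S / B)

Events with a large S_i/B_i, i.e. close to the source, with high energy, and inside the flare window, dominate the likelihood fit.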
III. BINARY SYSTEM PERIODICITY SEARCH
One class of high-energy binary systems, microquasars, includes a compact object with an accretion disk emitting relativistic jets of matter. The jets are assumed to accelerate protons, hence pp and pγ interactions are possible. The two microquasars LS 5039 (which is out of the IceCube field of view) and LSI +61 303 [4] have been observed to emit TeV gamma-rays modulated with the orbital phase of the systems. H.E.S.S. detects the minimum of the photon emission from LS 5039 during superior conjunction, when the compact object is behind the massive star [5]. The gamma-ray modulation can be interpreted as an indication of absorption of gammas emitted from the compact object. Nonetheless, the modulation could be very different in neutrinos, since neutrino production depends on how much matter is crossed by the proton beam, which governs the interactions and decays. Since we assume that the modulation is related to the relative position of the accelerator with respect to the observer, we also include in our search objects for which no TeV modulation has yet been observed, using the period obtained from spectroscopic observations of the visible binary partner. We then leave the phase as a free parameter to be fit. Due to low statistics, a Gaussian is adequate to describe the time modulation. Hence our time-dependent pdf is:
T_i = \frac{1}{\sqrt{2\pi}\,\sigma_w} \exp\!\left(-\frac{|\phi_i - \phi_0|^2}{2\sigma_w^2}\right) ,    (3)
where σ_w is the width of the Gaussian in the period, φ_i is the phase of the event, and φ_0 is the phase of peak emission. The phase takes a value between 0 and 1.
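For illustration, a minimal Python sketch of Eq. (3), folding event times on the orbital period; the reference epoch and the wrapping of the phase difference across the periodic boundary are our assumptions.

import numpy as np

def periodic_time_pdf(t_event, period, phi0, sigma_w, t_ref=0.0):
    """Gaussian pdf in orbital phase, Eq. (3).

    t_event : event time (e.g. MJD)
    period  : orbital period in the same units
    phi0    : phase of peak emission, in [0, 1)
    sigma_w : Gaussian width in phase units
    t_ref   : reference epoch defining phase zero (assumption)
    """
    phase = ((t_event - t_ref) / period) % 1.0
    # Wrap the phase difference into [-0.5, 0.5] so that phases near 0 and 1
    # are treated as neighbours (assumed handling of the periodic boundary;
    # the normalization is then only approximate for very wide Gaussians).
    dphi = (phase - phi0 + 0.5) % 1.0 - 0.5
    return np.exp(-dphi**2 / (2.0 * sigma_w**2)) / (np.sqrt(2.0 * np.pi) * sigma_w)

# Example: an LSI +61 303-like period of 26.5 days, peak phase 0.6, width 0.1.
times = np.array([54300.0, 54310.0, 54316.0])
print(periodic_time_pdf(times, period=26.5, phi0=0.6, sigma_w=0.1))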
We find that this time-dependent method has a better discovery potential than the time-integrated analysis if the sigma of the emission is less than about 20% of the total period (Fig. 2). Since there are more degrees of freedom, the time-dependent analysis will perform worse if neutrinos are emitted over a large fraction of the period.

Fig. 2. Comparison of discovery potential at 5σ and 50% probability between the time-integrated and time-dependent methods for LSI +61 303.
We examined seven binary systems, listed in Tab. I, covering a range of declinations and periods. There was no evidence of periodicity for any of the sources tested. The most significant result for this search has a pre-trial p-value of 6%; we expect to see this level of significance from one of our seven trials in 35% of scrambled samples, hence we find no evidence for periodicity.
Object           RA (deg)   Dec (deg)   Period (d)   p-value
LSI +61 303      40.1       +61.2       26.5         0.51
Cygnus X-1       299.6      +35.2       5.6          0.63
Cygnus X-3       308.1      +40.9       0.2          0.09
XTE J1118+480    169.5      +48.0       0.2          0.11
GRS1915          288.8      +10.9       30.8         0.61
SS 433           287.9      +5.0        13.1         0.06
GRO 0422+32      65.4       +32.9       0.2          0.39

TABLE I
SYSTEM NAME, EQUATORIAL COORDINATES, PERIOD AND PRE-TRIAL P-VALUE.
IV. MULTIWAVELENGTH FLARES ANALYSIS
In high-energy environments, pγ and pp interactions produce pions and kaons that decay into photons and neutrinos. Thus, we expect a correlation between TeV γ and ν_µ fluxes. Blazars and binary systems exhibit
variability, with flares often observed to correlate in
several photon wavebands. Hence, if TeV information is
not available, we can use X-ray and optical data as well.
We use this expected time correlation between photons
and neutrinos to suppress the background of atmospheric
neutrinos, which have a random distribution in time, by
looking for neutrino emission in time windows selected
based on MWL information. By restricting our search
we need fewer events to achieve a 5σ signal than with the time-integrated search.
We use MWL observations to create a catalogue of
flares from blazars and binary systems which have states
of heightened non-thermal emission. We determine the
time window of our search based on the MWL data to
characterize the time and duration of peak brightness.
A. Selection of Flares
To collect a list of interesting flares we monitored alerts such as the Astronomer's Telegram or GCN for sources observed undergoing a change of state which may produce heightened neutrino emission. The selected catalogue is presented in Tab. II and illustrated here:
• 3C 454.3: flares were measured by AGILE GRID during July 24-30, 2007 [8] and again during Nov. 12-22, 2007 [9].
• 1ES 1959+650: was seen by INTEGRAL in a hard flux state (Nov. 25-28, 2007 [6]). Later Whipple obtained a few measurements around December 2-7 [7], which we also selected for investigation.
• Cygnus X-1: had a "giant outburst" seen by Konus-Wind and Suzaku-WAM [10]. These giant outbursts have been modeled in [11].
• S5 0716+71: was seen flaring in GeV, optical and radio bands during two periods, September 7-13, 2007 and Oct. 19-29, 2007 [12].
B. Method and Results
We tested two methods to search for neutrino flares. The first (hereafter the "box method") uses a pdf which counts only events which fall inside the selected time window:

T_i = \frac{H(t_{max} - t_i)\, H(t_i - t_{min})}{t_{max} - t_{min}} ,    (4)
where H is the Heaviside step function, and t_{min} and t_{max} are fixed from MWL data. The second method fits a Gaussian to describe the neutrino emission, fitting the mean of the flare and its duration inside the selected time window. The time factor in the source term is then:
T_i = \frac{1}{\sqrt{2\pi}\,\sigma_t} \exp\!\left(-\frac{|t_i - t_0|^2}{2\sigma_t^2}\right) ,    (5)
where t_0 is the peak emission time and σ_t is the width. The Gaussian search method yields more information about the flare, such as the width and the time of the peak of the emission, and can also use events outside of the time window. To focus the search on correlation with photon emission instead of an all-year search, we confined the mean to the time window, and the sigma cannot be longer than the time window. The Gaussian introduces two additional parameters to fit, while the box method has no additional parameters over the time-integrated search.
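A minimal Python sketch of the two time pdfs being compared, Eq. (4) (box) and Eq. (5) (Gaussian); the Gaussian parameters in the example are arbitrary, while the example window corresponds to the 10-day S5 0716+71 interval of Tab. II.

import numpy as np

def box_time_pdf(t, t_min, t_max):
    """Box pdf of Eq. (4): uniform inside [t_min, t_max], zero outside."""
    inside = np.heaviside(t_max - t, 1.0) * np.heaviside(t - t_min, 1.0)
    return inside / (t_max - t_min)

def gaussian_time_pdf(t, t0, sigma_t):
    """Gaussian pdf of Eq. (5) with mean t0 and width sigma_t."""
    return np.exp(-(t - t0)**2 / (2.0 * sigma_t**2)) / (np.sqrt(2.0 * np.pi) * sigma_t)

# Example: a 10-day MWL window; in the analysis the Gaussian mean and width
# would be fitted to the data rather than fixed by hand.
t = np.linspace(54390.0, 54404.0, 8)
print(box_time_pdf(t, t_min=54392.0, t_max=54402.0))
print(gaussian_time_pdf(t, t0=54397.0, sigma_t=2.0))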
To compare the two methods, we generated signal events with Gaussian time distributions of different widths and added them to scrambled data. Our figure of merit is the minimum flux required for a 50% probability of a 5σ discovery. We find the box method outperforms the Gaussian unless the FWHM of the signal function is less than 10% or greater than 110% of the width of the time window. We show the discovery potential curves for time windows of 3 and 10 days in Fig. 3. We also tested the possibility that the time window we chose based on MWL information is not centered on a neutrino flare by injecting events with an offset in the window, still finding a region where the box method requires fewer events for discovery. Hence the box method, which performs better than the Gaussian method in a broad part of the signal parameter space, was selected for providing the final p-values.

Fig. 3. Comparison of the box and Gaussian methods for the flare search. The mean number of events needed for a 5σ detection is plotted against the width of neutrino emission.
We found that 5 of 7 flares we examined were best
fit by 0 source events, while S5 0716+71 and 1ES
1959+650 each showed one contributing event during
a flare. Considering that we looked at 7 flares, the post-trials p-value is 14% for the most significant result, the 10-day flare of S5 0716+71. This value is compatible with background fluctuations.
Source          Alert Ref.   Time Window            p-value
1ES 1959+650    [6]          MJD 54428-54433        1
1ES 1959+650    [7]          MJD 54435.5-54440.5    0.08
3C 454          [8]          MJD 54305-54311        1
3C 454          [9]          MJD 54416-54426        1
Cyg X-1         [10]         MJD 54319.5-54320.5    1
S5 0716+71      [12]         MJD 54350-54356        1
S5 0716+71      [12]         MJD 54392-54402        0.02

TABLE II
FLARE LIST: SOURCE NAME, REFERENCES FOR THE ALERT, INTERVAL IN MODIFIED JULIAN DAY, PRE-TRIAL P-VALUE.
V. FUTURE DEVELOPMENTS: ANALYSIS BASED ON
LONG LIGHT CURVES
With the advent of Fermi, long and regularly sampled high-energy γ-ray light curves will soon be available. The Fermi public data [13] already provide a first glimpse of the variable behavior of bright sources and the quality of the data. We plan to analyze Fermi light curves using the method described in [14]. Following this approach, the analysis of long light curves will provide:
• A systematic selection of flaring periods: until now the selection of flaring periods has been biased because detections are often triggered by alerts. The sky monitoring provided by Fermi will eliminate this bias.
• A systematic criterion to define the threshold for a flare: once enough data have been accumulated, the flare statistics will provide a characteristic level and a standard deviation. With a safe 3σ threshold, flaring periods cannot be confused with intrinsic fluctuations of the detector and can be selected uniformly across the entire period considered.
• The possibility to select more than one flare in the same light curve, to estimate the frequency of the high states.
• A non-parametric time-dependent signal pdf.
Our analysis of long Fermi light curves is still in development and is for the moment limited by the relatively short duration of Fermi data taking. We illustrate the method using the light curve collected by RXTE-ASM for Mkn 421 (Fig. 4). About 10 years of RXTE-ASM data are analyzed in order to extract a characteristic level of the source and determine flaring periods, as in [14]. For example, here the threshold for flaring has been fixed at the 3σ level, which corresponds to 1.7 RXTE/ASM counts/sec. Interpreting periods selected above this level with the Maximum Likelihood Block algorithm provides the time-dependent pdf (see Fig. 5).
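The following toy Python sketch illustrates the threshold-based selection described above: bins of a light curve above the characteristic level plus 3σ are selected and turned into a normalized, piecewise-constant time pdf. It is only a simplified stand-in for the Maximum Likelihood Block treatment, which is not reproduced here, and all numbers are illustrative.

import numpy as np

def flare_time_pdf(bin_times, bin_widths, rates, n_sigma=3.0):
    """Build a piecewise-constant time pdf from light-curve bins above a threshold.

    bin_times  : centers of the light-curve time bins
    bin_widths : widths of the bins (same units)
    rates      : measured count rates per bin
    n_sigma    : threshold in standard deviations above the characteristic level
    """
    level = np.mean(rates)
    threshold = level + n_sigma * np.std(rates)
    selected = rates > threshold
    # Weight each selected bin by its excess above the characteristic level,
    # then normalize so that the pdf integrates to one over time (assumption).
    weights = np.where(selected, rates - level, 0.0)
    norm = np.sum(weights * bin_widths)
    if norm == 0.0:
        raise ValueError("no bins above threshold")
    def pdf(t):
        in_bin = np.abs(t[:, None] - bin_times[None, :]) <= bin_widths[None, :] / 2.0
        return in_bin.astype(float) @ (weights / norm)
    return threshold, pdf

# Toy example with daily bins and one injected high state.
rng = np.random.default_rng(1)
times = np.arange(0.0, 100.0)
rates = rng.normal(1.0, 0.1, size=times.size)
rates[40:45] += 1.5  # a 5-day high state
thr, pdf = flare_time_pdf(times, np.ones_like(times), rates)
print("threshold:", round(thr, 2), " pdf at day 42:", pdf(np.array([42.0])))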
While the rate of events observed in IceCube is approximately stable over timescales of a few days, the variability of the background has to be considered if longer periods are tested. The main source of variation in the observed event rates is changes in the detector uptime. These will be incorporated into the description of the background.
VI. CONCLUSIONS
We have presented the results of a time-dependent analysis of the IceCube 22-string data sample. We searched for a periodic time structure of neutrinos from binary systems, and for neutrinos in coincidence with high flux states from sources for which other experiments issued alerts. Results in all cases were consistent with background fluctuations. We also provide insight on how MWL information may in the future be used directly to create a time pdf to analyze correlations of photon and neutrino emission.

Fig. 4. Subperiod of the Mkn 421 light curve collected by ASM/RXTE, for illustration of the method.

Fig. 5. The time pdf resulting from application of the 3σ threshold described in the text to the Mkn 421 light curve.
REFERENCES
[1] J. Braun et al., Astropart. Phys. 29 (2008) 299.
[2] T. Neunhöffer, Astropart. Phys. 25 (2006) 220.
[3] J. Dumm et al., for the IceCube Collaboration, these proceedings.
[4] J. Albert et al., ApJ 693 (2009) 303.
[5] F. Aharonian et al., A&A 460 (2006) 743.
[6] E. Bottacini et al., http://www.astronomerstelegram.org/?read=1315
[7] VERITAS Collab., http://veritas.sao.arizona.edu/documents/summary1es1959.table
[8] S. Vercellone et al., ApJ 676 (2008) L13.
[9] S. Vercellone et al., ApJ 690 (2009) 1018.
[10] S. Golenetskii et al., GCN 6745, Aug 10, 2007.
[11] G. E. Romero, M. M. Kaufman Bernado and I. F. Mirabel, A&A 393 (2002) L61.
[12] M. Villata et al., A&A 481 (2008) L79.
[13] http://fermi.gsfc.nasa.gov/ssc/data/access/lat/msl_lc/
[14] E. Resconi et al., arXiv:0904.1371.
Search for neutrino flares from point sources with IceCube
J. L. Bazo Alba∗, E. Bernardini∗, R. Lauer∗, for the IceCube Collaboration†
∗ DESY, D-15738 Zeuthen, Germany
† see special section of these proceedings
Abstract. A time-dependent search for neutrino
flares from pre-defined directions in the whole sky
is presented. The analysis uses a time clustering
algorithm combined with an unbinned likelihood
method. This algorithm provides a search for sig-
nificant neutrino flares over time-scales that are
not fixed a-priori and that are not triggered by
multiwavelength observations. The event selection
is optimized to maximize the discovery potential,
taking into account different time-scales of source
activity and background rates. Results for the 22-
string IceCube data from a pre-defined list of bright
and variable astrophysical sources will be reported
at the conference.
Keywords: IceCube, Neutrino Flares, Clustering.
I. INTRODUCTION
Several astrophysical sources are known to have a variable photon flux at different wavelengths, showing flares that last from several minutes to several days. Hadronic models of Active Galactic Nuclei (AGNs) predict [1][2] neutrino emission associated with these multi-wavelength (MWL) emissions. Time-integrated analyses are less sensitive in this flaring scenario because they contain a higher background of atmospheric neutrinos and atmospheric muons; a time-dependent analysis is more sensitive because it reduces the background by searching smaller time scales around the flare.
A direct approach that looks for this correlation using
specific MWL observations is reported in [3].
In order to make the flare search more general, and
since MWL observations are scarce and not available for
all sources, we take an approach not triggered by MWL
observations. We apply a time-clustering algorithm
(see [4]) to pre-defined source directions looking
for the most significant accumulation in time (flare)
of neutrino events over background, considering all
possible combinations of event times. One disadvantage
of this analysis is the increased number of trials,
which reduces the significance. Nevertheless, for flares
sufficiently shorter than the total observation period,
the time clustering algorithm is more sensitive than a
time integrated analysis. The predicted time scales are
well below this threshold.
II. FLARE SEARCH ALGORITHM
The time clustering algorithm chooses the most promising flare time windows based on the times of the most signal-like events in the analyzed data. Each combination of these event times defines a search time window (∆t_i). For each ∆t_i a significance parameter λ_i is calculated. The algorithm returns the best λ_max, corresponding to the most significant cluster. The significance can be obtained using two approaches: a binned method, as in the previous implementation [4], and an improved unbinned maximum likelihood method [5], which enhances the performance.
The unbinned maximum likelihood method defines the significance parameter by:

\lambda = -2 \log\!\left[\frac{L(\vec{x}_s, n_s = 0)}{L(\vec{x}_s, \hat{n}_s, \hat{\gamma}_s)}\right] ,    (1)
where \vec{x}_s is the source location, and \hat{n}_s and \hat{\gamma}_s are the best estimates of the number of signal events and the source spectral index, respectively, which are found by maximizing the likelihood L:
L = \prod_{i=1}^{n_{tot}} \left[\frac{n_s}{n_{tot}} S_i + \left(1 - \frac{n_s}{n_{tot}}\right) B_i\right]    (2)
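To show how Eqs. (1) and (2) are used, the Python sketch below evaluates λ for one set of events by maximizing the likelihood over n_s on a grid; in the real analysis γ_s is fitted simultaneously through the energy pdf and a proper optimizer is used, so this is only schematic.

import numpy as np

def lambda_test_statistic(S, B, n_steps=2000):
    """Evaluate the test statistic of Eq. (1) for one set of events.

    S, B : arrays of per-event signal and background pdf values
    Returns (lambda_max, best-fit n_s).
    """
    n_tot = len(S)
    ratio = S / B
    n_s_grid = np.linspace(0.0, n_tot, n_steps)
    # log L(n_s) - log L(0), summed over events, from Eq. (2)
    delta_logL = np.array([
        np.sum(np.log(ns / n_tot * ratio + (1.0 - ns / n_tot)))
        for ns in n_s_grid
    ])
    i_best = np.argmax(delta_logL)
    lam = 2.0 * delta_logL[i_best]   # lambda = -2 log [L(n_s=0)/L(n_s_hat)]
    return lam, n_s_grid[i_best]

# Toy example: 50 background-like events plus 3 events with large S/B.
rng = np.random.default_rng(0)
S = rng.uniform(0.01, 0.05, size=53)
B = np.full(53, 0.05)
S[:3] = 1.0   # three signal-like events
print(lambda_test_statistic(S, B))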
The background probability density function (pdf), B_i, calculated purely from data distributions, is given by:

B_i = P_i^{space}(\theta_i, \phi_i)\, P_i^{energy}(E_i, \theta_i)\, P_i^{time}(\theta_i) ,    (3)
where P^{space} describes the distribution of events in a given area (a zenith band of 8° is used for convenience). In a simple case this probability would be flat because of the random distribution of background events. However, due to the applied cuts, Earth absorption properties and the detector geometry, this probability depends on zenith, θ_i, and azimuth, φ_i. The irregular azimuthal distribution caused by the detector geometry is shown in Fig. 1. For time-integrated analyses covering one year the dependence on azimuth is negligible because the exposure is integrated over all right ascension directions. However, an azimuth correction becomes important for time scales shorter than 1 day, reaching up to a 40% difference, and thus it should be included in time-dependent analyses. P^{space} has unit integral over solid angle inside the test region (i.e. the zenith band).
The energy probability P_i^{energy} is determined from the energy estimator distribution and depends on the zenith coordinate. In the southern sky an energy-sensitive event selection is the most efficient way to reduce the atmospheric muon background. This energy cut decreases with zenith angle, thus creating a zenith dependence of the energy distribution. Therefore a zenith-dependent energy probability, shown in Fig. 2, is needed. Note that for the northern sky this correction is small.
Fig. 1. Normalized azimuth distribution of the data sample reported in [9].
Fig. 2. Background energy pdf from data as a function of the energy estimator and zenith angle. The ultra high energy sample [10] is used. The southern sky corresponds to cos(θ) > 0.
Given the low statistics at the final sample level, estimating the background by counting events inside a time window would introduce significant errors for short time scales. Therefore another approach is used, namely to fit the event rates in the entire observed period as a function of time. Two regions of the sky (South and North) are distinguished because they have different properties. The northern sky sample consists mostly of atmospheric neutrinos, which do not show a significant seasonal variation, therefore a constant fit is used. For the southern sky, a sinusoidal fit is used because it is dominated by a background of high energy atmospheric muons, which have a seasonal variation. These fits are shown in Fig. 3 and include the necessary correction for the uptime¹ of the detector. It has been verified that the time modulations for different zenith bands within a half hemisphere are the same, thus allowing us to use all events inside the half hemisphere for the fit of the rates.

¹ The uptime takes into account the inefficiency periods and data gaps after data quality selection.
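A Python sketch of this background treatment: fit a sinusoid (with an assumed one-year period) to the southern-sky rates and integrate the fitted rate over a candidate time window to obtain the expected number of background events; for the northern sky a constant fit would be used instead. The functional form and all numbers are illustrative.

import numpy as np
from scipy.optimize import curve_fit

def seasonal_rate(t, r0, amplitude, phase):
    """Sinusoidal rate model with a fixed one-year period (assumed form)."""
    return r0 + amplitude * np.sin(2.0 * np.pi * (t - phase) / 365.25)

def expected_background(params, t_start, t_stop, uptime_fraction=1.0):
    """Expected number of background events in [t_start, t_stop] (MJD),
    obtained by integrating the fitted rate (in Hz) over the window."""
    t = np.linspace(t_start, t_stop, 1000)
    rate = seasonal_rate(t, *params)              # Hz
    seconds = (t_stop - t_start) * 86400.0
    return np.mean(rate) * seconds * uptime_fraction

# Toy southern-sky rates in Hz over the 22-string period.
rng = np.random.default_rng(2)
mjd = np.linspace(54280.0, 54560.0, 120)
truth = seasonal_rate(mjd, 3.0e-5, 0.7e-5, 54300.0)
measured = truth + rng.normal(0.0, 0.1e-5, size=mjd.size)
params, _ = curve_fit(seasonal_rate, mjd, measured, p0=[3e-5, 1e-5, 54300.0])
print("expected events in a 7-day window:",
      round(expected_background(params, 54400.0, 54407.0, uptime_fraction=0.89), 1))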
Fig. 3. Uptime-corrected rates and their fits for the southern (left) and northern (right) skies.
The signal pdf, S_i, is given by:

S_i = P_i^{space}(|\vec{x}_i - \vec{x}_s|, \sigma_i)\, P_i^{energy}(E_i, \theta_i, \gamma_s) ,    (4)
where the spatial probability, P_i^{space}, is a Gaussian function of |\vec{x}_i − \vec{x}_s|, the angular difference between the source location, \vec{x}_s, and each event's reconstructed direction, \vec{x}_i, with σ_i the angular error estimate of the reconstructed track. The estimator used for σ_i is the size of the error ellipse around the maximum value of the reconstructed event track likelihood. The energy probability, P_i^{energy}, constructed from signal simulation, is a function of the event energy estimate, E_i, the zenith coordinate, θ_i, and the assumed energy spectral index of the source, γ_s (for an E^{-γ_s} spectrum). A projection of P_i^{energy} for the whole sky is shown in Fig. 4. For a given θ_i and γ_s the energy pdf is normalized to unity over E_i. For the energy, a dedicated estimator of the number of photons per track length is used. No flare time structure is assumed (i.e. the signal is taken to be flat in time), therefore there is no need to include a time-dependent term in the signal pdf.
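Schematically, the time-clustering scan combines these ingredients as in the Python sketch below: every pair of signal-like event times defines a candidate window, λ is evaluated for the events inside it, and the largest value is kept. Here the per-event S_i and B_i are treated as fixed numbers, ignoring the window-dependent background normalization handled by the rate fit described above, so this illustrates the bookkeeping rather than the full analysis.

import numpy as np

def lambda_for_window(S, B):
    """Test statistic of Eq. (1), maximizing Eq. (2) over n_s on a grid."""
    n_tot = len(S)
    ratio = S / B
    ns = np.linspace(0.0, n_tot, 500)[:, None]
    delta_logL = np.sum(np.log(ns / n_tot * ratio + (1.0 - ns / n_tot)), axis=1)
    return 2.0 * np.max(delta_logL)

def time_cluster_scan(event_times, S, B, max_duration=30.0):
    """Return (lambda_max, t_start, t_stop) over all windows defined by event pairs."""
    order = np.argsort(event_times)
    t, S, B = event_times[order], S[order], B[order]
    best = (0.0, None, None)
    for i in range(len(t)):
        for j in range(i + 1, len(t)):
            if t[j] - t[i] > max_duration:   # flares longer than 30 days not considered
                break
            lam = lambda_for_window(S[i:j + 1], B[i:j + 1])
            if lam > best[0]:
                best = (lam, t[i], t[j])
    return best

# Toy example: 40 background events over 276 days plus a 3-event cluster.
rng = np.random.default_rng(3)
times = np.concatenate([rng.uniform(0.0, 276.0, 40), [100.0, 101.5, 102.0]])
S = np.concatenate([rng.uniform(0.01, 0.05, 40), [0.8, 1.0, 0.9]])
B = np.full(43, 0.05)
print(time_cluster_scan(times, S, B))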
Fig. 4. Projection for the whole sky of the energy component of the signal pdf as a function of the energy estimator and energy spectral index. The ultra high energy sample [10] is used.
TABLE I
LIST OF VARIABLE ASTROPHYSICAL SOURCES AND THEIR DETECTION PROBABILITY (TIME VARIABLE AND TIME INTEGRATED) FOR A SIMULATED FLARE OF 7 DAYS WITH AN E^-2 ENERGY SPECTRUM AND A POISSON MEAN OF 5 INJECTED EVENTS. THE EQUIVALENT STEADY FLUX CORRESPONDS TO 5 EVENTS INJECTED AT ANY TIME IN THE FULL ICECUBE DATA-TAKING PERIOD (276 DAYS).

Source           Type   dec [°]   ra [°]   Det. prob. (3σ) [%]           Eq. flux
                                           Time Variable  Time Integr.   [×10^-11 TeV cm^-2 s^-1]
GEV J0540-4359   LBL    -44.1     84.7     46.8           24.5           57.5
GEV J1626-2502   FSRQ   -25.5     246.4    85.6           80.8           30.9
GEV J1832-2128   FSRQ   -21.1     278.4    77.1           72.1           21.2
GEV J2024-0812   FSRQ   -7.6      306.4    37.5           14.4           3.0
3C 279           FSRQ   -5.8      194.1    26.1           9.8            2.4
3C 273           FSRQ   2.0       187.3    50             12.4           1.2
CTA 102          FSRQ   11.7      338.1    36.2           13.6           1.0
GEV J0530+1340   FSRQ   13.5      82.7     31.4           10.1           1.1
3C 454.3         FSRQ   16.1      343.5    70.1           12.2           1.2
GEV J0237+1648   LBL    16.6      39.7     69             11             1.2
We use a binned-method implementation of the time clustering algorithm as a cross-check of our new unbinned analysis. In the case of the binned method, a circular angular search bin (2.5° radius) around the source direction is used. The times of the events that define the search time windows (∆t_i) are given by all the events inside this angular bin. The significance parameter is obtained from Poisson statistics, given the number of expected background events inside the bin and the observed events in each cluster with multiplicity² m. The expected number of background events is calculated by integrating, in the given time window, the fit to the rates, as described above. This calculation takes into account the zenith dependence of the background, in zenith bands with the size of the bin, the corresponding uptime factor and the azimuth correction.
The best significance obtained for a cluster is corrected for trial factors by running several Monte Carlo background-only simulations. The simulation is done by creating distributions from data of zenith, azimuth, reconstruction error and energy estimator. The event characteristics are randomly drawn from these distributions while considering the correlations between the different parameters. In order to study the performance of the algorithm, we calculate the neutrino flare detection probability as a function of the signal strength and the duration of the flare by simulating signal events on top of background events³. The properties of signal events are taken from a dedicated signal simulation and depend on the assumed energy spectral index. The Point Spread Function (PSF) is used to smear the events around the source location, thus simulating the effect of the direction reconstruction. For each simulation, a random time is chosen around which signal events are randomly injected inside the time window defined by the flare duration. The flare duration is investigated in the range from 1 day to 15 days, though the algorithm finds the best time window, which could be larger. We constrain
the largest flare duration in the algorithm to be less than 30 days, which is sensible from γ-ray observations.

² The integral of the Poisson distribution of the background events starts at (m−1) since the beginning and end of the time period are fixed from the data itself.
³ The numbers of injected background and signal events are Poisson distributed.
III. SOURCE SELECTION
Since searching for all directions in the sky would de-
crease the significance, we consider only a few promis-
ing sources, thus reducing the number of trials. We
select variable bright astrophysical sources in the whole
sky. The selected blazars, including Flat Spectrum Radio
Quasars (FSRQs) and Low-frequency peaked BL Lacs
(LBLs), are taken from the confirmed Active Galactic
Nuclei (AGN) in the third EGRET catalogue (3EG) [6].
We also require that they are present in the latest Fermi catalogue (0FGL) [7]. The criteria for selecting variable and bright sources are based on the following parameter thresholds:
• Variability index (3EG) > 1
• Maximum 3EG flux (E > 100 MeV) > 40 × 10^-8 ph cm^-2 s^-1
• Average 3EG flux (E > 100 MeV) > 15 × 10^-8 ph cm^-2 s^-1
• Inside the visibility region of IceCube.
The selected source list consists of 10 directions (Table I) that will be tested with the time clustering algorithm. Models like [2] favor fluxes of higher energy neutrinos from FSRQ sources. Given the absorption of neutrinos at different energies in the Earth and the event cut strategy, southern-sky FSRQs are more favored by these models because of the higher energy range of sensitivity there.
IV. DATA SAMPLES
IceCube [8] 22-string data from 2007-08 are used, spanning 310 days with an overall effective detector uptime of 88.9% (i.e. 276 days). The whole sky (declination range from -50° to 85°) is scanned. Different selection criteria are applied for the northern and southern skies. Previously obtained reconstructed datasets are used: the standard point source sample for the northern sky [9] (5114 events, declination from -5° to 85°, 1.4° sky-averaged median angular resolution) and the dedicated ultra high energy sample for the southern sky [10] (1877 events in the whole sky, declination from -50° to 85°, 1.3° sky-averaged median angular resolution).
The first sample is optimized, within an unbinned method, for sensitivity to both hard and soft spectrum sources. The second sample was optimized for a binned method at ultra high energies. Therefore it should be noted that the binned method results are much better in the southern sky than in the northern sky. Nevertheless, the unbinned method still performs better in the southern sky for an E^-2 energy spectrum. The energy containment in these two regions is different, with ranges from TeV to PeV and from PeV to EeV in the northern and southern sky, respectively. Event tracks are obtained with a multi-photoelectron⁴ (MPE) [11] reconstruction, which improves the angular resolution at high energies.
V. RESULTS
The probability of a 3σ flare detection using this time-variable analysis (time clustering algorithm) for a given number of injected signal events (a Poisson mean of 5 events) with an E^-2 energy spectrum inside a seven-day window is shown for all sources in Table I. For comparison purposes, time-integrated detection probabilities integrated over the whole 22-string IceCube data period (276 days) are also given. In the northern sky, the same simulated signal was on average four times more likely to be detected at 3σ with the unbinned time-variable search than with the time-integrated search, and in the southern sky, on average about twice as likely. The gain in the southern sky is not as substantial as in the northern sky because the discovery potential without time properties is already greater there: for the same number of injected signal events the background is relatively smaller. A more detailed example for two sources, in the southern and northern skies, for different
time scales and signal fluxes is presented in Fig. 5. For shorter flare durations the detection probability increases and is well above that of a time-integrated search. It can be seen that the behaviour differs between the two parts of the sky. This is caused by the different types of backgrounds (high energy atmospheric muons in the south and atmospheric neutrinos in the north) and the difference in the number of final events in each sample (fewer events in the southern sky) due to the different selection cuts.

⁴ The MPE reconstruction takes the arrival time distribution of the first of N photons using the cumulative distribution of the single-photon pdf.

Fig. 5. Detection probability (3σ) for two source directions: (a) southern sky at (dec = -7.6°, ra = 306.4°); (b) northern sky at (dec = 16.1°, ra = 343.5°). The curves correspond to different time durations of the flares (1-, 7-, and 15-day flares, and time integrated) as a function of the injected flux with an E^-2 energy spectrum, using an unbinned time-variable method (dashed), compared to a time-integrated method (solid). The same mean number of events is injected into the time windows (1, 7, 15, and 276 days) at each point on the x-axis, which is labeled with the equivalent steady flux (×10^-11 TeV cm^-2 s^-1) corresponding to the full 276-day period.
VI. SUMMARY
We have presented the sensitivity of the time clustering algorithm using an unbinned maximum likelihood method. This is an improvement over the previous performance using a binned method and over time-integrated analyses. The search window for variable sources has been extended to the southern sky. IceCube 22-string data will be analyzed using this method, looking for neutrino flares with no a priori assumption on the time structure of the signal.
REFERENCES
[1] D. F. Torres and F. Halzen, Astropart. Phys. 27 (2007) 500.
[2] A. Atoyan and C. D. Dermer, New Astron. Rev. 48 (2004) 381.
[3] M. Baker et al. for the IceCube Collab., these proceedings.
[4] K. Satalecka et al. for the IceCube Collab., Proc. 30th ICRC, Merida, Mexico, July 3-11, 2007, pp. 115-118.
[5] J. Braun et al., Astropart. Phys. 29 (2008) 299.
[6] P. L. Nolan et al., ApJ 597 (2003) 615-627.
[7] A. A. Abdo et al. (Fermi LAT Collaboration), 2009, arXiv:0902.1340.
[8] A. Karle et al. for the IceCube Collab., ARENA Proceedings, 2008.
[9] J. L. Bazo Alba et al. for the IceCube Collab., NOW 2008 Proceedings, Nucl. Phys. B (Proc. Suppl.) 188 (2009) 267-269, arXiv:0811.4110.
[10] R. Lauer et al. for the IceCube Collab., Proc. 2nd Heidelberg Workshop "HE γ-rays and ν's from Extra-Galactic Sources", 2009, arXiv:0903.5434.
[11] J. Ahrens et al. (AMANDA Collab.), Nucl. Instrum. Meth. A 524 (2004) 169.
Neutrino triggered high-energy gamma-ray follow-up with IceCube
Robert Franke∗, Elisa Bernardini∗ for the IceCube Collaboration†
∗ DESY Zeuthen, D-15738 Zeuthen, Germany
† See the special section of these proceedings.
Abstract. We present the status of a program for the generation of online alerts issued by IceCube for gamma-ray follow-up observations by Air Cherenkov telescopes (e.g. MAGIC). To overcome the low probability of simultaneous observations of flares of objects with gamma-ray and neutrino telescopes, a neutrino-triggered follow-up scheme has been developed. This mode of operation aims at increasing the availability of simultaneous multi-messenger data, which can increase the discovery potential and constrain the phenomenological interpretation of the high energy emission of selected source classes (e.g. blazars). This requires a fast and stable online analysis of potential neutrino signals. We present the work on a significance-based alert scheme for a list of phenomenologically selected sources. To minimize the rate of false alerts due to detector instabilities, a fast online monitoring scheme based on IceCube trigger and filter rates was implemented.
Keywords: IceCube, neutrino, gamma-ray follow-up
I. INTRODUCTION
A Neutrino Triggered Target of Opportunity pro-
gram (NToO) was developed already in 2006 using the
AMANDA array to initiate quasi-simultaneous gamma-
ray follow-up observations by MAGIC. The aim of such
an approach is to increase the chance to discover cosmic
neutrinos by on-line searches for correlations with estab-
lished signals (e.g. flares in high-energy gamma-rays)
triggered by neutrino observations. For sources which
manifest large time variations in the emitted radiation,
the signal-to-noise ratio can be increased by limiting
the neutrino exposures to most favorable periods. The
chance of discovery can then be enhanced (the so
called ”multi-messenger approach”) by ensuring a good
coverage of simultaneous data at a monitoring waveband
(e.g. gamma-rays). The first realization of such an ap-
proach led to two months of follow-up observations of
AMANDA triggers by MAGIC, focused on a selected
sample of Blazars as target sources [1]. An extension
of this program to IceCube and also to optical follow-
up observations has been later realized with the ROTSE
network of optical telescopes, addressing possible cor-
relations between neutrino multiplets and either GRBs
or Supernovae [2].
Multi-messenger studies can be accomplished off-
line, searching for correlations between the measured
intensity curves in the electromagnetic spectrum and the
time of the detected neutrinos. The major limitations
encountered so far were due to the scarce availability
of information on the electromagnetic emission of the
objects of interest, which typically are not observed
continuously. Whenever data is available, such an a-
posteriori approach is however very powerful, and it is
part of the research plans of the IceCube Collaboration.
We emphasize that a neutrino telescope at the South
Pole is continuously and simultaneously sensitive to
all objects located in the northern hemisphere. The
investigation of the correlation between the observed
properties of the electromagnetic emission and the de-
tected neutrinos is therefore at any time feasible once
the relevant electro-magnetic information is available.
In other words, on-line and off-line approaches have to
be seen as complementary and not mutually exclusive.
In the case of variable objects like blazars and FSRQs, as well as Galactic systems like microquasars and magnetars, hadronic models describing the very high energy gamma-ray emission also predict simultaneous high energy neutrinos. Absorption processes might attenuate the gamma-ray luminosity when the objects are brightest in neutrinos, so that an anti-correlation or time-lag might be predicted as well. In all cases, the availability of simultaneous data on high energy gamma-ray emission and (possibly) neutrinos is mandatory to test different scenarios and shed light on the emission mechanisms (e.g. to extract information on the optical depth and on other astrophysical source parameters).
II. SELECTION OF TARGET SOURCES
The most interesting objects as a target for gamma-ray
follow-up observations of IceCube events are promising
sources of TeV neutrinos, which are either known to
exhibit a bright GeV flux in gamma-rays and show
extrapolated fluxes detectable by Imaging Air Cherenkov
Telescopes, or are already detected by IACTs and
are variable. Candidates currently being considered are
AGNs (HBL, LBL, FSRQs), Microquasars and Magne-
tars (SGRs). A preliminary source list, derived from observations with the Fermi [6] and EGRET [3] experiments, is based on the following criteria:
• Source is present in both the third EGRET (3EG) and Fermi catalogues;
• Source is classified as variable in the Fermi catalogue;
• Variability index > 1 in the 3EG catalog (taken from [5]);
• Maximum 3EG flux > 40 × 10^-8 ph cm^-2 s^-1, E > 100 MeV;
• Average 3EG flux > 15 × 10^-8 ph cm^-2 s^-1, E > 100 MeV;
• Difference between the maximum 3EG flux and the minimum 3EG flux > 30 × 10^-8 ph cm^-2 s^-1, E > 100 MeV.

Fig. 1. Predicted rate of atmospheric neutrinos as a function of cos(θ), based on Monte Carlo, for IceCube in its 2009/2010 configuration with 59 deployed strings.
The sources that were selected according to these criteria
can be found in Table I.
III. EVENT SELECTION
The basis for the event selection is an online filter that searches for up-going muon tracks. The rate of this filter is about 24 Hz for IceCube in its 2009/2010 configuration with 59 deployed strings. As the computing resources at the South Pole are limited, one cannot run more elaborate reconstructions at this rate, so a further event selection has to be done. This so-called Level-2 filter selects events that were reconstructed by a likelihood reconstruction with a zenith angle θ > 80° (θ = 0° corresponds to vertically down-going tracks). By requiring a good reconstruction quality, the background of misreconstructed atmospheric muons is further reduced. The parameters used to assess the track quality are the likelihood of the track reconstruction and the number of unscattered photons with a small time residual w.r.t. the Cherenkov cone. The reduced event rate of approximately 2.9 Hz can then be processed with more time-intensive reconstructions, like a likelihood fit seeded with ten different tracks (iterative fit). The fit with the best likelihood is used for further cuts. Based on this reconstruction, the final event sample is selected by employing a zenith angle cut of θ > 90° for the iterative reconstruction and further event quality cuts based on this reconstruction. In addition to the already mentioned parameters, we also employ a cut on the longest distance between hits with a small time residual (compared to their expected arrival time calculated from the track geometry) when projected on the reconstructed track. The resulting rate of atmospheric neutrinos as predicted by Monte Carlo is shown as a function of zenith angle in Figure 1.
IV. THE TIME-CLUSTERING ALGORITHM
The timescale of a neutrino flare is not fixed a-priori
and thus a simple rolling time window approach is not
adequate to detect flares. The time clustering approach
that was developed for an unbiased neutrino flare search
[7] looks for any time frame with a significant deviation
of the number of detected neutrinos from the expected
background. The simplest implementation uses a binned
approach where neutrino candidates within a fixed bin
around a source are regarded as possible signal events.
To exploit the information that can be extracted from the estimated reconstruction error and other event properties like the energy, an unbinned maximum-likelihood method is under development.
If a neutrino candidate is detected at time t_i around a source candidate, the expected background N_{bck}^{i,j} is calculated for all other neutrino candidates j with t_j < t_i from that source candidate. To calculate N_{bck}^{i,j}, the detector efficiency as a function of the azimuth angle and the uptime has to be taken into account. The probability to observe the multiplet (i, j) by chance is then calculated according to

\sum_{k=N_{obs}^{i,j}-1}^{\infty} \frac{(N_{bck}^{i,j})^k}{k!}\, e^{-N_{bck}^{i,j}}    (1)
where N_{obs} is the number of detected on-source neutrinos between t_j and t_i. It has to be reduced by 1 to take into account the bias that one only does this calculation when a signal candidate is detected. As typical flares in high energy gamma-rays have a maximal duration of several days, we constrain our search for time clusters of neutrinos to three weeks.
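Equation (1) is simply a Poisson tail probability and can be evaluated with the survival function, as in the following Python sketch; the example numbers are illustrative.

from scipy.stats import poisson

def multiplet_chance_probability(n_obs, n_bck):
    """Probability of Eq. (1): P(k >= n_obs - 1) for a Poisson mean n_bck.

    n_obs is reduced by one because the calculation is only performed
    when a signal candidate has already been detected.
    """
    return poisson.sf(n_obs - 2, n_bck)   # sf(k) = P(K > k) = P(K >= k + 1)

# Example: a doublet (n_obs = 2) with 0.01 expected background events
# in the considered time window and search bin.
print(multiplet_chance_probability(2, 0.01))   # ~1 - exp(-0.01) ≈ 0.00995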
If the cluster with the highest significance exceeds a certain threshold (e.g. corresponding to 5σ), the detector stability will be checked and an alert will be sent to a Cherenkov telescope to initiate a follow-up observation.
V. DATA QUALITY
Data quality is very important for any online alert program, to minimize the rate of false alerts due to detector or DAQ instabilities. IceCube has very extensive monitoring of the DAQ and of the South Pole online processing. However, most of this information is only available with a certain delay after data-taking and is thus not useful for a follow-up program which requires fast alerts. To ensure that alerts are only sent for neutrino multiplets that were detected during stable running conditions, a simple but powerful stability monitoring scheme has been developed. It is based on a continuous measurement of the relevant trigger and filter rates and their respective ratios in time bins of 10 minutes. These values are then compared to a running average of these rates over approximately four days to detect significant deviations. The running average is necessary because slow seasonal changes in the atmosphere and faster weather changes influence the rate of atmospheric muons which dominate the Level-2 rate. An example of this behaviour can be seen in Figures 2 and 3.
TABLE I
PRELIMINARY CANDIDATE SOURCE LIST FOR NEUTRINO TRIGGERED FOLLOW-UP OBSERVATIONS. THE TYPE OF AGN HAS BEEN TAKEN FROM [4].

Source name      Blazar type   Dec. [°]   RA [°]   max. 3EG flux   min. 3EG flux   avg. 3EG flux
                                                   [10^-8 cm^-2 s^-1]
3C 273           FSRQ          2.0        187.3    48.3            8.5             15.4
CTA 102          FSRQ          11.7       338.1    51.6            12.1            19.2
GEV J0530+1340   FSRQ          13.5       82.7     351.4           32.4            93.5
3C 454.3         FSRQ          16.1       343.5    116.1           24.6            53.7
GEV J0237+1648   LBL           16.6       39.7     65.1            11.6            25.9
Fig. 2. Rate of the Level-2 online filter for four months (December 2008 to April 2009) in 10-minute bins with IceCube in its 2008/2009 configuration with 40 deployed strings. The Level-2 filter is an online filter that is used for different follow-up observation programs. The slow change in the rate is due to seasonal variations in the atmospheric muon background rate caused by pressure changes in the atmosphere.
This system was tested off-line on data from IceCube in its 40-string configuration and proved to correlate very well with the extensive off-line detector monitoring. The fraction of data that has to be discarded due to detector or software problems was about 6%, which includes all periods in Figures 2 and 3 that significantly deviate from the average. This method will be implemented online for IceCube in its 2009/2010 configuration with 59 deployed strings.
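A simplified Python sketch of such a stability check: each 10-minute filter rate is compared to a running average over the preceding four days and flagged if it deviates by more than a chosen tolerance. The 20% tolerance and the details of the averaging are assumptions for illustration, not the deployed monitoring code.

import numpy as np

def flag_unstable_bins(rates, bin_minutes=10, window_days=4.0, tolerance=0.2):
    """Flag rate bins deviating from a running average by more than `tolerance`.

    rates       : filter rate per time bin (Hz), in chronological order
    bin_minutes : length of one bin in minutes
    window_days : length of the running-average window
    tolerance   : maximal allowed relative deviation from the running average
    """
    n_window = int(window_days * 24 * 60 / bin_minutes)
    flags = np.zeros(rates.size, dtype=bool)
    for i in range(rates.size):
        past = rates[max(0, i - n_window):i]
        if past.size < n_window // 2:
            continue                      # not enough history yet
        avg = np.mean(past)
        flags[i] = abs(rates[i] - avg) > tolerance * avg
    return flags

# Toy example: slow seasonal drift plus a short instability.
rng = np.random.default_rng(4)
n_bins = 30 * 24 * 6                      # ~30 days of 10-minute bins
t = np.arange(n_bins)
rates = 2.2 + 0.3 * t / n_bins + rng.normal(0.0, 0.02, n_bins)
rates[3000:3010] = 0.5                    # a short DAQ problem
flags = flag_unstable_bins(rates)
print("flagged bins:", np.flatnonzero(flags)[:12])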
VI. SIGNIFICANCE CALCULATION
Under the hypothesis that all the neutrinos are of atmospheric origin, the probability of observing at least N_{obs} multiplets above the significance threshold and detecting at least N_{coinc} coincident gamma-ray flares is given by:

\sum_{m=N_{obs}}^{+\infty} \frac{(N_{bck})^m}{m!}\, e^{-N_{bck}} \sum_{j=N_{coinc}}^{m} \frac{m!}{j!\,(m-j)!}\, (p_{gam})^j (1 - p_{gam})^{m-j}    (2)
where the first term describes the Poisson probability of observing at least N_{obs} neutrino multiplets with N_{bck} background expected, and the second term describes the probability of observing at least N_{coinc} out of m (the running number of observed multiplets, larger than or equal to N_{obs}), each with a probability p_{gam}. We note that this probability can be calculated at any time a posteriori, once a realistic knowledge of the probability p_{gam} to detect a gamma-ray flare in a time window ∆t is available.

Fig. 3. Histogram of the rates of the online Level-2 filter of IceCube in its 2008/2009 configuration with 40 deployed strings for the four months shown in Figure 2. The bin at a value of 5.5 Hz contains all entries bigger than 5.5 Hz.
In order to avoid statistical biases it is mandatory, however, that the statistical test is defined a priori, i.e. that the conditions to accept an observation and to define a coincidence are fixed beforehand. Methods to reliably estimate the probability p_{gam} of detecting a gamma-ray flare in a time window ∆t, which is influenced by the source elevation and weather conditions, from the frequency of the observed gamma-ray flares are under development. The significance calculated above also does not account for the trial factor correction due to the selection of three or more objects, which can however be calculated as the product of the individual terms corresponding to each source. The probability of having at least one coincidence in any of the proposed sources is, for example:

P = 1 - \prod_{i=1}^{N_{Sources}} P_i^0    (3)
where P_i^0 is the probability of having zero coincidences at source i.
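The Python sketch below evaluates Eq. (2) by combining the Poisson term for the neutrino multiplets with the binomial tail for the coincident gamma-ray flares, and Eq. (3) for the combination over several sources. Truncating the infinite sum at a large m is a numerical shortcut, and the input numbers are illustrative.

from math import exp, factorial, comb

def p_multiplets_with_flares(n_obs, n_coinc, n_bck, p_gam, m_max=200):
    """Probability of Eq. (2): at least n_obs multiplets (Poisson mean n_bck)
    with at least n_coinc of them accompanied by a gamma-ray flare (prob. p_gam).
    The infinite sum over m is truncated at m_max (numerical shortcut)."""
    total = 0.0
    for m in range(n_obs, m_max + 1):
        poisson_term = (n_bck ** m) * exp(-n_bck) / factorial(m)
        binom_tail = sum(comb(m, j) * p_gam**j * (1.0 - p_gam)**(m - j)
                         for j in range(n_coinc, m + 1))
        total += poisson_term * binom_tail
    return total

def p_any_source(per_source_probs):
    """Eq. (3): probability of at least one coincidence in any source,
    given per-source probabilities of at least one coincidence."""
    p_none = 1.0
    for p in per_source_probs:
        p_none *= (1.0 - p)
    return 1.0 - p_none

# Example: one multiplet observed with one coincident flare, 0.5 multiplets
# expected from background and a 10% chance of a chance gamma-ray flare.
p1 = p_multiplets_with_flares(n_obs=1, n_coinc=1, n_bck=0.5, p_gam=0.1)
print(round(p1, 4), round(p_any_source([p1] * 5), 4))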
VII. THE GAMMA-RAY FOLLOW-UP OBSERVATION
SCHEME
We propose an observation scheme as follows:
Fig. 4. Preliminary alert rate from the atmospheric neutrino background for IceCube in its 2009/2010 configuration with 59 deployed strings, as a function of declination, for an alert threshold on the multiplet significance corresponding to 3σ (upper points) and 5σ (lower points) and a bin size of 2°.
• Up to 1 day after receiving an IceCube alert from one of the pre-defined directions, the source is scheduled to be observed as soon as visibility and observation conditions allow.
• If the gamma-ray observation is possible, it will continue for one hour.
• The results of the on-line analysis will be checked and, if there is a positive hint (above 3σ), the gamma-ray observations may be extended. In case of a positive observation (i.e. a gamma-ray flux trespassing the pre-defined threshold defining a flare), the opportunity to trigger multi-wavelength observations should then be considered.

Due to the irreducible background of atmospheric neutrinos (Figure 1) one can estimate the alert rate for different zenith regions (Figure 4) for thresholds corresponding to 3σ and 5σ. The on-source bin has been preliminarily chosen to have a radius of 2°.
Based on a simple Monte Carlo simulation that does not take into account detector features like the azimuth-dependent efficiency, we calculated the discovery probabilities for different numbers of injected on-source events (see Figure 5) at a declination of 26°. The discovery probability is defined here as the probability to detect a 5σ deviation with the time-clustering method.
VIII. STATUS
The event selection and the software to calculate the significance of a neutrino cluster are implemented and ready to be deployed at the South Pole. As IceCube in its 2009/2010 configuration with 59 deployed strings is considerably bigger than the previous detector configuration, the stability monitoring needs to be checked with the first weeks of physics data. Pending the approval of the follow-up program by a Cherenkov telescope collaboration, we then aim for a timely implementation of this program.
Fig. 5. Preliminary discovery probability for a given number of injected on-source neutrino events, for flare durations of 1, 2, 4 and 8 days, for a source at a declination of 26°. The discovery probability is defined here as the probability to detect a 5σ deviation with the time-clustering method. This does not include the probability of the gamma-ray observation.
IX. OUTLOOK
Besides enhancing the chance to discover point sources of neutrinos, the gamma-ray follow-up approach discussed here can increase the chance of detecting unusual gamma-ray emission from the selected objects. It can also provide an important contribution to the understanding of the flaring behavior of a few emitters of high-energy gamma-rays, in a way complementary to X-ray observations. Most importantly, it can provide a series of coincidences and therefore represent an important input to dedicated multi-wavelength follow-up observations, which will assess in more detail the phenomenology of the potential sources. In fact – thanks to the existing communication infrastructures of multi-wavelength campaigns – the observation of gamma-ray flares can start a monitoring of the objects at other wavelengths (e.g. X-ray) that would further complement the information discussed here.
Moon Shadow Observation by IceCube
D.J. Boersma∗, L. Gladstone† and A. Karle† for the IceCube Collaboration‡
∗RWTH Aachen University, Germany
†Department of Physics, University of Wisconsin, Madison, WI 53706, USA
‡see special section of these proceedings
Abstract. In the absence of an astrophysical stan-
dard candle, IceCube can study the deficit of cosmic
rays from the direction of the Moon. The observation
of this “Moon shadow” in the downgoing muon flux is
an experimental verification of the absolute pointing
accuracy and the angular resolution of the detector
with respect to energetic muons passing through.
The Moon shadow has been observed in the 40-
string configuration of IceCube. This is the first stage
of IceCube in which a Moon shadow analysis has
been successful. Method, results, and some systematic
error studies will be discussed.
Keywords: IceCube, Moon shadow, pointing capability
I. INTRODUCTION
IceCube is a cubic-kilometer-scale Cherenkov detector
at the geographical South Pole, designed to search
for muons from high energy neutrino interactions. The
arrival directions and energy information of these muons
can be used to search for point sources of astrophysical
neutrinos, one of the primary goals of IceCube.
The main component of IceCube is an array of optical
sensors deployed in the glacial ice at depths between
1450 m and 2450 m. These Digital Optical Modules
(DOMs), each containing a 25 cm diameter photo-
multiplier tube with accompanying electronics within
a pressure housing, are lowered into the ice along
“strings.” There are currently 59 strings deployed of 86
planned; the data analyzed here were taken in a 40 string
configuration, which was in operation between April
2008 and April 2009. There are 13 lunar months of data
within that time. In this analysis we present results from
8 lunar months of the 40 string configuration.
For a muon with energy on the order of a TeV, IceCube can reconstruct an arrival direction with order 1° accuracy. For down-going directions, the vast
majority of the detected muons do not originate from
neutrino interactions, but from high energy cosmic ray
interactions in the atmosphere. These cosmic ray muons
are the dominant background in the search for astro-
physical neutrinos. They can also be used to study the
performance of our detector. In particular, we can verify
the pointing capability by studying the shadow of the
Moon in cosmic ray muons.
As the Earth travels through the interstellar medium,
the Moon blocks some cosmic rays from reaching the
Earth. Thus, when other cosmic rays shower in the
Earth’s atmosphere and create muons, there is a rela-
tive deficit of muons from the direction of the Moon.
IceCube detects these muons, not the primary cosmic
rays. Since the position and size of the Moon are so well
known, the resulting deficit can be used for detector cal-
ibration. The idea of a Moon shadow was first proposed
in 1957 [1], and has become an established observation
for a number of astroparticle physics experiments; some
examples are given in references [2], [3], [4], [5]. Exper-
iments have used the Moon shadow to calibrate detector
angular resolution and pointing accuracy [6]. They have
also observed the shift of the Moon shadow due to the
Earth’s magnetic field [7]. The analysis described here
is optimized for a first observation, and does not yet
include detailed studies such as describing the shape of
the observed deficit. These will be addressed in future
studies.
II. METHOD
A. Data and online event selection
Data transfer from the South Pole is limited by the
bandwidth of two satellites; thus, not all downgoing
muon events can be immediately transmitted. This anal-
ysis uses a dedicated online event selection, choosing
events with a minimum quality and a reconstructed
direction within a window of acceptance around the
direction of the Moon. The reconstruction used for the
online event selection is a single (i.e., not iterated) log-
likelihood fit.
The online event selection is defined as follows, where δ denotes the Moon declination:
• The Moon must be at least 15° above the horizon.
• At least 12 DOMs must register each event.
• At least 3 strings must contain hit DOMs.
• The reconstructed direction must be within 10° of the Moon in declination.
• The reconstructed direction must be within 40°/cos(δ) of the Moon in right ascension; the cos(δ) factor corrects for projection effects.
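For illustration only, the online selection above can be written as a small filter function; the event and Moon fields used here (n_doms, n_strings, reco_dec, reco_ra, altitude, dec, ra) are hypothetical placeholders, not the actual IceCube filter interface.

```python
import math

def passes_moon_filter(event, moon):
    """Sketch of the online Moon-filter selection listed above.
    `event` and `moon` are hypothetical dicts; all angles are in degrees."""
    if moon["altitude"] < 15.0:                         # Moon at least 15 deg above the horizon
        return False
    if event["n_doms"] < 12 or event["n_strings"] < 3:  # at least 12 DOMs on at least 3 strings
        return False
    if abs(event["reco_dec"] - moon["dec"]) > 10.0:     # within 10 deg of the Moon in declination
        return False
    dra = abs(event["reco_ra"] - moon["ra"]) % 360.0    # right-ascension difference, wrapped
    dra = min(dra, 360.0 - dra)
    # within 40 deg / cos(dec) of the Moon in right ascension (projection-corrected)
    return dra <= 40.0 / math.cos(math.radians(moon["dec"]))
```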
These events are then sent via satellite to the northern
hemisphere for further processing, including running the
higher-quality 32-iteration log-likelihood reconstruction
used in further analysis.
The Moon reached a maximum altitude of 27° above the horizon (δ = −27°) in 2008, when viewed from
Fig. 1. The energy spectrum of (simulated) CR primaries of muons (or muon bundles) triggering IceCube (x-axis: log10(E/GeV); y-axis: rate in Hz). Red: all events; blue: primaries with δ > 30°.
Fig. 2. The rate of events passing the Moon filter (in Hz, lower curve) averaged hourly, together with the position of the Moon above the horizon at the South Pole (in degrees, upper curve, plotted as −(Moon declination)), versus time in days since 3 September 2008, over 3 typical months.
the IceCube detector. The trigger rate from cosmic ray
muons is more than 1.2 kHz in the 40 string configu-
ration, but most of those muons travel nearly vertically,
and thus they cannot have come from directions near the
Moon. Only ∼11% of all muons that trigger the detector come from angles less than 30° above the horizon.
Furthermore, muons which are closer to horizontal (and
thus closer to the Moon) must travel farther before
reaching the detector. They need a minimum energy
to reach this far (see Fig. 1): the cosmic ray primaries
which produce them must have energies of at least 2 TeV.
Three typical months of data are shown in Fig. 2,
along with the position of the Moon above the horizon.
The dominant shape is from the strong increase in muon
flux with increasing angle above the horizon: as the
Moon rises, so do the event rates near the Moon. This
can be seen clearly in the correlation between the two
sets of curves. There is a secondary effect from the
layout of the 40 strings. One dimension of the detector
layout has the full width (approximately 1km) of the
completed detector, while the other is only about half
as long. When the Moon is aligned with the short axis,
fewer events pass the filter requirements. This causes the
12 hour modulation in the rate.
B. Optimization of offline event selection and search bin size
A simulated data sample of 10^5 downgoing muon events was generated using CORSIKA [8].
A set of cuts was developed using the following estimated relation between the significance S, the efficiency
Fig. 3. The x-axis shows the angular difference ψ between the true and reconstructed track. The y-axis shows the fraction of events with this or lower angular error (the cumulative point spread function ∫₀^ψ PSF(ψ′)dψ′). The blue curve shows the event sample after offline event selection, and the red curve shows the event sample after online event selection.
η of events passing the cut, and the resulting median angular resolution Ψ_med of the sample:
S(cuts) ∝ √η(cuts) / Ψ_med(cuts)      (1)
Since the deficit is based on high statistics of events in
the search bin, this function provides a good estimator
for optimizing the significance.
The following cuts were chosen:
• At least 6 DOMs are hit with light that has not been scattered in the ice, allowing a −15 ns to +75 ns time window to account for minimal scattering.
• Projected onto the reconstructed track, two of those hits must be at least 400 meters apart.
• The 1σ estimated error ellipse on the reconstructed direction has a mean radius less than 1.3°.
The cumulative point spread function of the sample after
the above quality cuts is shown as the blue line in Fig. 3.
The size Ψ_search of the search bin is optimized for a maximally significant observation using a similar √N-error based argument and the resulting relation, which follows. Using the cumulative point spread function of the sample after quality cuts, we have:
S(Ψ_search) ∝ [ ∫₀^{Ψ_search} PSF(ψ′) dψ′ ] / Ψ_search      (2)
Maximizing this significance estimator gives an optimal search bin radius of 0.7°. This analysis uses square bins with an area equal to that of the optimized round bin, with side length 1.25°.
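As a quick cross-check (not from the original text), the quoted square-bin side follows from requiring the same area as the optimized round bin, πr² = s²:

```python
import math

r_opt = 0.7                         # optimized round search-bin radius [deg]
side = math.sqrt(math.pi) * r_opt   # equal-area square bin: pi * r^2 = side^2
print(f"{side:.2f} deg")            # ~1.24 deg, consistent with the quoted 1.25 deg
```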
C. Calculating significance
To show that the data are stable in right ascension α, we show, in Fig. 4, the number of events in the central declination band. The errors shown are √N. The average of all bins excluding the Moon bin is 27747, which is plotted as a line to guide the eye. The Moon bin has 852 events below this simple null estimate. This represents a 5.2σ deficit using √N errors.
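For orientation (not from the original text), the quoted 5.2σ can be reproduced with simple Poisson (√N) errors; the sketch below assumes the error is taken from the Moon-bin counts, which is one plausible reading of the numbers above.

```python
import math

mean_off = 27747                                # average events per off-Moon bin
deficit = 852                                   # events missing from the Moon bin
print(deficit / math.sqrt(mean_off - deficit))  # ~5.2 (sqrt(N) error from the Moon-bin counts)
print(deficit / math.sqrt(mean_off))            # ~5.1 (sqrt(N) error from the null estimate)
```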
Fig. 4. Number of events per 1.25° square bin, relative to the position of the Moon (x-axis: (α_evt − α_Moon)cos(δ_Moon) in degrees). The declination of the reconstructed track is within 0.625° of the declination of the Moon. The average of all bins except the Moon bin is shown as a red line to guide the eye (IceCube preliminary).
Fig. 5. The significance of deviations in a region centered on the Moon, shown per 1.25° × 1.25° bin in Moon-centered coordinates ((α_event − α_Moon)cos(δ_Moon) versus δ_event − δ_Moon, both in degrees); the central bin contains the −5.0σ Moon deficit (IceCube preliminary).
Fig. 6. Each of the deviations shown in Fig. 5 is plotted here. The deviations of the central 9 bins are shown in red. The surrounding bins are shown with a black line histogram and fit with a Gaussian curve (fit results: constant 30.09 ± 3.18, mean −0.027 ± 0.081, sigma 0.947 ± 0.063, χ²/ndf = 9.7/9).
Although this shows that the data are stable, this simple error estimate is vulnerable to variations in small data samples. Although we do not see such variations here, we considered it prudent to use an error estimate which takes into account the size of the background sample. We used a standard formula from Li and Ma [9] for calculating the significance of a point source:
S = (N_on − αN_off) / √(α(N_on + N_off)).      (3)
where N_on is the number of events in the signal sample, N_off is the number of events in the off-source region, and α is the ratio of on- to off-source observing times. We take α instead as the ratio of on- to off-source areas observed, since the times are equal.
The above significance formula is applied to the Moon data sample in the following way. The data are first plotted in the standard Moon-centered equatorial coordinates, correcting for projection effects with a factor of cos(δ). The plot is binned using the 1.25° × 1.25° bin size optimized in the simulation study. Each bin is successively considered as an on-source region. There is
a very strong declination dependence in the downgoing
muon flux, so variations of the order of the Moon deficit
are only detectable in right ascension. Thus, off-source
regions are selected within the same zenith band as the
on-source region. Twenty off-source bins are used for
each calculation: ten to the right and ten to the left of
the on-source region, starting at the third bin out from
the on-source bin (i.e., skipping two bins in between).
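The per-bin calculation just described can be sketched as follows (illustrative only; the counts are taken from the numbers quoted in this paper, and with ten off-source bins on either side the area ratio is α = 1/20).

```python
import math

def li_ma_significance(n_on, n_off, alpha):
    """Simplified Li & Ma significance of Eq. (3): S = (N_on - a*N_off) / sqrt(a*(N_on + N_off))."""
    return (n_on - alpha * n_off) / math.sqrt(alpha * (n_on + n_off))

# Illustrative numbers consistent with the counts quoted in this paper:
# one on-source (Moon) bin versus 20 equal-area off-source bins.
n_on = 27747 - 852          # events in the Moon bin
n_off = 20 * 27747          # summed events in the 20 off-source bins
print(li_ma_significance(n_on, n_off, alpha=1.0 / 20.0))  # about -5, a Moon-like deficit
```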
III. RESULTS
For a region of 7 bins or 8.75° in declination δ and 23 bins or 28.75° in right ascension α around the Moon, the significance of the deviation of the count rate in each bin with respect to its off-source region was calculated, as described in section II-C. The result is plotted in Fig. 5. The Moon can be seen as the 5.0σ deficit in the central bin, at (0, 0).
To test the hypothesis that the fluctuations in the back-
ground away from the Moon are distributed randomly
around 0, we plot them in Fig. 6. The central 9 bins,
including the Moon bin, are not included in the Gaussian
fit, but are plotted as the lower, shaded histogram. The
width of the Gaussian fit is consistent with 1; therefore,
the background is consistent with random fluctuations.
IV. CONCLUSIONS AND FUTURE PLANS
IceCube has observed the shadow of the Moon as a 5.0σ deviation from event counts in nearby regions, using data from 8 of the total 13 lunar months in the data-taking period with the 40-string detector setup. From this, we can conclude that IceCube has no systematic pointing error larger than the search bin, 1.25°.
In the future, this analysis will be extended in many
ways. First, we will include all data from the 40 string
detector configuration. We hope to repeat this analysis
using unbinned likelihood methods, and to describe the
size, shape, and any offset of the Moon Shadow. We will
then use the results of these studies to comment in more
detail on the angular resolution of various reconstruction
algorithms within IceCube. This analysis is one of the few end-to-end checks of IceCube systematics based solely on experimental data.
LG acknowledges the support of a National Defense
Science and Engineering Graduate Fellowship from the
American Society for Engineering Education.
REFERENCES
[1] G.W. Clark, Arrival Directions of Cosmic-Ray Air Showers from the Northern Sky, Physical Review Vol. 108, No. 2, October 15, 1957.
[2] A. Karle for the HEGRA collaboration, The Angular Resolution of the HEGRA Scintillation Counter Array at La Palma, Ann Arbor 1990 Proceedings, High Energy Gamma-Ray Astronomy, 127-131.
[3] N. Giglietto, Performance of the MACRO detector at Gran Sasso: Moon shadow and seasonal variations, 1997, Nuclear Physics B Proceedings Supplements, Volume 61, Issue 3, p. 180-184.
[4] M.O. Wasco for the Milagro collaboration, Study of the Shadow of the Moon and the Sun with VHE Cosmic Rays, 1999, arXiv:astro-ph/9906.388v1.
[5] The Soudan 2 collaboration, Observation of the Moon Shadow in Deep Underground Muon Flux, 1999, arXiv:hep-ex/9905.044v1.
[6] The Tibet AS Gamma Collaboration, M. Amenomori et al., Multi-TeV Gamma-Ray Observation from the Crab Nebula Using the Tibet-III Air Shower Array Finely Tuned by the Cosmic-Ray Moon's Shadow, arXiv:astro-ph/0810.3757v1.
[7] L3 Collaboration, P. Achard et al., Measurement of the Shadowing of High-Energy Cosmic Rays by the Moon: A Search for TeV-Energy Antiprotons, Astropart. Phys. 23:411-434, 2005, arXiv:astro-ph/0503472v1.
[8] D. Heck, J. Knapp, J.N. Capdevielle, G. Schatz, T. Thouw, CORSIKA: A Monte Carlo Code to Simulate Extensive Air Showers, FZKA 6019 (1998).
[9] Li, T.-P. and Ma, Y.Q., Analysis methods for results in gamma-ray astronomy, 1983, ApJ 272, 317.
IceCube/AMANDA combined analyses for the search of neutrino sources at low energies
Cécile Roucelle∗, Andreas Gross∗, Sirin Odrowski∗, Elisa Resconi∗, Yolanda Sestayo∗ for the IceCube Collaboration†
∗MPIK, Heidelberg, Germany
†See the special section of these proceedings
Abstract. During the construction of IceCube, the
AMANDA neutrino telescope has continued to ac-
quire data and has been surrounded by IceCube
strings. Since the year 2007, AMANDA has been
fully integrated for data acquisition and joint Ice-
Cube/AMANDA events have been recorded. Because
of the finer spacing of AMANDA phototubes, the
inclusion of AMANDA significantly extends the de-
tection capability of IceCube alone for low energy
neutrinos (100 GeV to 10 TeV). We present the results of two analyses performed on the 2007-2008 IceCube (22 string) and AMANDA data. No evidence of high energy neutrino emission was observed; upper limits are reported. In 2008-09, IceCube acquired data in a 40 string configuration together with the last year of operation of AMANDA. Progress on the analysis of this new combined IceCube/AMANDA sample is presented as well. In addition, a novel method to
study an extended region surrounding the most active
parts of Cygnus with these datasets is described here.
Keywords: Neutrino astronomy, galactic sources,
IceCube, AMANDA, DeepCore
I. INTRODUCTION
Recent detections by Cherenkov telescopes provide
evidence of particle acceleration up to TeV energies
in astrophysical sources [1]. The TeV γ-ray emission
from these sources could arise from the acceleration
of electrons (production of γ-rays via inverse Compton
scattering) or the acceleration of hadrons (production
of γ-rays through the decay of neutral pions produced
in pp/pγ interactions). In the latter scenario, the γ-ray production would be accompanied by neutrino production, since charged pions, like neutral pions, would be generated and decay within the source. The detection of high-energy neutrinos would thus be an unambiguous proof of hadron acceleration in these sources.
In particular, galactic TeV γ-ray sources present the
bulk of their γ-ray emission at energies lower than a
few TeV. The spectrum from these sources is soft with
a typical spectral index (|Γ| > 2) and often exhibits
an exponential cut-off at a few TeV. Both observations
suggest a break in the neutrino spectrum below 100 TeV. Accordingly, the flux from these sources would differ from the standard spectral index of -2 assumed for neutrino sources. Additionally, they represent “low energy” sources (TeV) for IceCube and would be challenging to detect. To enhance the sensitivity to this type of
sources, an analysis comprising both the IceCube and
the AMANDA detector has been performed. The higher density of optical modules in AMANDA than in IceCube provides enough additional hits that the reconstruction of low-energy, neutrino-induced events becomes possible. This increase in statistics particularly
benefits searches for sources with steeply falling spectra
(see Sec. III and Sec. IV). A first analysis has been
made using the 22 string configuration of IceCube in
combination with the AMANDA detector; the results
are presented in this proceeding. A new sample of data
has been collected with IceCube-40 and AMANDA and
is under analysis. We present here the general scheme
for this analysis, with particular emphasis on a specific
development to enhance the detection sensitivity for
extended active regions in the galactic plane.
II. GALACTIC SOURCES: THE γ-ν CONNECTION
Since neutrinos and γ-rays are expected to be produced
together in hadronic acceleration processes, the neutrino
spectrum can be inferred from the observed γ-ray spec-
trum of the source by a two-step procedure:
1 - The γ-ray spectrum from a source is fitted as-
suming a pp interaction model obtained using the
parametrizations given in [4]. Possible γ-ray ab-
sorption is estimated and corrected for before the
fit.
2 - With the obtained proton distribution and the
target density, the expected neutrino spectrum is
estimated.
The Crab Nebula γ-ray energy spectrum has been measured in detail by the H.E.S.S. experiment [8]. It is described by a power law with spectral index (Γ) of -2.4 and has a γ-ray energy cutoff at ∼14 TeV. Although numerous arguments attribute the γ-ray production of this source to e+/e− acceleration, its status as a standard candle argues for its use as a reference for neutrino astronomy. Moreover, the establishment of sufficiently low upper limits by IceCube on the neutrino emission could bring new constraints on possible hadron acceleration at this source. Assuming that γ-rays from the Crab Nebula originate from hadronic processes (decay of π0 mesons generated in pp interactions at the source) and that their absorption is negligible, the resulting ν spectrum is:
Φ = 3 × 10^−7 · e^(−E/7 TeV) · (E/GeV)^−2.4  GeV^−1 cm^−2 s^−1      (1)
In the following, we use this computed spectrum as
a reference (“Crab Nebula spectrum”) to estimate the
sensitivity of analyses to low energy sources.
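For illustration (not part of the original text), Eq. (1) can be evaluated directly; the sketch below computes the reference Crab neutrino flux at a few energies.

```python
import numpy as np

def crab_reference_flux(energy_gev):
    """Reference Crab Nebula neutrino spectrum of Eq. (1):
    Phi = 3e-7 * (E/GeV)^-2.4 * exp(-E / 7 TeV)  [GeV^-1 cm^-2 s^-1]."""
    e = np.asarray(energy_gev, dtype=float)
    return 3e-7 * e**-2.4 * np.exp(-e / 7e3)

for energy in (1e2, 1e3, 1e4):  # 100 GeV, 1 TeV, 10 TeV
    print(f"E = {energy:8.0f} GeV -> Phi = {crab_reference_flux(energy):.3e} GeV^-1 cm^-2 s^-1")
```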
III. ICECUBE-22/AMANDA: RESULTS
During the two deployment seasons 2003-2004 and
2004-2005 at the South Pole, the data acquisition system
(DAQ) of AMANDA was significantly upgraded to pro-
vide nearly deadtime-less operation and full digitization
of the electronic readout [2]. This was achieved by using
Transient Waveform Recorders (TWR). The new DAQ
system allowed for the reduction of the multiplicity trig-
ger threshold and, consequently, of the energy threshold
to ∼50 GeV. By being optimally sensitive to neutrinos
under 1 TeV, AMANDA thus complements IceCube well
and was integrated into the full IceCube analysis starting
in January 2006.
A. Data sample and methods
The IceCube 22-string run represents 276 days taken between May 2007 and April 2008. Within this period, the AMANDA detector was taking data jointly with IceCube for 143 days. Since the 2006-07 deployment season, every time the AMANDA detector is triggered, a readout request is sent to the IceCube detector, and the events are then merged for processing. The trigger
rates are strongly dominated by downgoing, atmospheric
muons produced in cosmic ray air showers above the
detector. They outnumber atmospheric neutrinos by a factor of ∼10^6. This background is largely eliminated by limiting the analysis to upgoing muons using a fast reconstruction algorithm which is applied to all of the data. The selected events are then further pared down by applying a CPU-intensive, likelihood-based reconstruc-
tion algorithm that accounts for the properties of the
ice and then cutting on the fit direction and fit quality
parameters. In this analysis, these cuts were optimized
to obtain the best discovery potential for a source with a
“Crab Nebula” spectrum (Eqn. 1). As low energy events
are mainly due to the dominant atmospheric neutrino
background, a significantly larger number of events is
obtained with this selection than with other IceCube-22
point source searches [12].
In total, 8727 events are selected, of which 3430 are combined IceCube/AMANDA events. Despite the smaller size of AMANDA (1/6 of the volume of IceCube-22) and its shorter livetime (less than 60% of that of IceCube-22), the contribution of AMANDA to the combined detector sample, particularly at low energies, is clearly visible in the energy distribution of simulated atmospheric neutrinos retained at the final event selection of the analysis (Fig. 1). As a consequence, the sensitivity achieved with this approach for a source with a spectrum similar to the one expected for the Crab Nebula (Γ=-2.4; cut-off at 7 TeV) is better than the one achieved with the IceCube-only analysis (Fig. 2). However, for a harder spectrum (Γ=-2; no cutoff), the standard IceCube-only analysis remains better adapted.
Fig. 1. Event energy distribution (in log(E/GeV)) for simulated atmospheric neutrinos at the final level of the galactic point source analysis, normalized to the livetime of the IceCube 22-string data taking (276 days) for IceCube-only events and to the combined IceCube+AMANDA livetime (143 days) for the AMANDA and combined events. Curves: combined (IC22+AMANDA), AMANDA, and IC22 without AMANDA.
Fig. 2. Sensitivity (E²Φ in GeV cm^−2 s^−1 versus energy) for a source spectrum with Γ=-2 and for a “Crab” spectrum (Γ=-2.4; cut-off at 7 TeV). This analysis (gray) is compared to the standard IceCube-only analysis (black).
B. Search on an a priori selected list of point sources
With this dataset, a search for neutrino emission was performed for a list of four preselected sources: the Crab Nebula, Cas A, SS 433 and LS I +61 303. For three of them, the γ-ray spectrum is known ([8]-[11]), so we optimized the analysis for the expected corresponding neutrino spectrum (for SS 433, which has no measured γ-ray spectrum, the optimization was made with respect to a test spectrum with a spectral index Γ=-2.4 and a cut-off at 7 TeV). The test-statistic for the analysis is
the log likelihood ratio of the signal hypothesis with
best fit parameters to the pure background hypothesis.
This method is widely used in IceCube [7]. This test-
statistic provides an estimate for the significance of a
deviation from background (pre-trial p-value) at a posi-
tion in the sky. The post-trial p-value is then determined
by applying the analysis to randomized samples. With
this method, the lowest pre-trial p-value (p=0.14) was
obtained for the Crab Nebula. This p-value or a lower
one can be achieved in 37% of randomized samples.
This excess is therefore not significant. The number of
signal events detected and their associated pre-trial p-
values are summarized in the table below. Based on the
γ-ray observations, the expected neutrino spectral index
and possible cut-off energies have been calculated using
the method described in Sec. II and are indicated in the
same table.
Source          Γ_ν     ν cut-off   Nb. of signal events   p-value (pre-trial)
Crab Nebula     -2.39   7 TeV       3.3                    0.14
Cas A           -2.4    -           -1.9                   0.65
SS 433          -       -           -0.9                   0.67
LS I +61 303    -2.8    -           -0.4                   0.47
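The log-likelihood-ratio test statistic used for these searches is the standard IceCube point-source method [7]; the sketch below is a simplified, generic illustration (the per-event signal and background PDF values are assumed inputs), not the actual IceCube analysis code.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def llr_test_statistic(signal_pdf, background_pdf):
    """Simplified unbinned likelihood-ratio test statistic
    TS = 2 log[ L(n_s_hat) / L(0) ], with
    L(n_s) = prod_i [ (n_s/N) S_i + (1 - n_s/N) B_i ]."""
    s = np.asarray(signal_pdf, dtype=float)
    b = np.asarray(background_pdf, dtype=float)
    n = len(s)

    def neg_ts(ns):  # -2 log likelihood ratio for a given number of signal events
        return -2.0 * np.sum(np.log(ns / n * s / b + (1.0 - ns / n)))

    res = minimize_scalar(neg_ts, bounds=(0.0, 0.999 * n), method="bounded")
    return max(0.0, -res.fun), res.x  # (test statistic, fitted n_s)
```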
Fig. 3. Top: Crab upper limit (E² dN/dE in GeV cm^−2 s^−1 versus energy) obtained in this study, compared to the reference neutrino spectrum computed for the Crab as in section II using [4] for the modeling. Bottom: Cas A upper limit obtained in this study, compared to its reference neutrino spectrum.
Upper limits on the neutrino flux were derived from the number of events observed in the direction of the different sources with this analysis. The limits obtained for the Crab Nebula and Cas A are presented in Fig. 3 and compared to their expected neutrino spectra. For the Crab Nebula, for example, the limit that can be set by this IceCube-22/AMANDA analysis is a factor of 18.9 above the expected reference spectrum. The same calculation was made for Cas A (Fig. 3, bottom). This source was detected by HEGRA up to 10 TeV without evidence of a high energy cut-off [9]. We extrapolate the power-law γ-ray spectrum given in [10] up to higher energies.
Fig. 4. Galactic plane scan (longitude: 31.5° < l < 214.5°, latitude: -3° < b < 3°) pre-trial significance map for IceCube-22/AMANDA. The strongest excess is at l=75.875°, b=2.675° (pre-trial p-value = 0.0037); 95% of randomized datasets yielded a more significant excess.
C. Galactic plane scan
In addition to these sources, we performed an unbinned point source search of the galactic plane in the nominal field of view of IceCube (longitude: 31.5° < l < 214.5°, latitude: -3° < b < 3°). The result of this search is shown in Fig. 4. The most significant deviation from background observed in this unbiased galactic plane search is seen at l=75.875°, b=2.675° in galactic coordinates. The pre-trial p-value at this location is 0.0037. For 95% of the randomized datasets (reproducing a pure background hypothesis) an equal or lower probability is found, and thus the observed excess is not significant.
IV. ICECUBE-40/AMANDA: EXPECTATIONS
A. Data sample
For the dataset acquired between April 17, 2008 and February 2, 2009 with the IceCube 40-string configuration, the total livetime of IceCube was 268.7 days, and the AMANDA sub-detector performed much better than in the 2007/8 season, with a total livetime of 240 days over the same period, corresponding to almost 90% of the IceCube livetime. As a consequence, even with the doubling of the size of IceCube, the relative number of combined IceCube-40/AMANDA events compared to IceCube-40-only events remains comparable to the ratio obtained with the IceCube-22/AMANDA dataset. The data are still being processed for the selection of neutrino candidates, and the final analysis will be carried out in the near future. Beyond replicating the galactic plane scan and the search for the same list of a priori selected sources with these new data, we will search for multiple unresolved sources in the Cygnus region applying a new analysis strategy.
B. Extended sources: Multi-Point Source analysis
Particular interest is given to active regions of the galactic plane, where several accelerators might contribute to a possible neutrino signal. The Cygnus region is a very active star-forming region located at
Fig. 5. Number of event pairs (separated by less than 2°) for the signal case divided by the average histogram of random cases with the MPS method, for a simulated case with 3 sources (each yielding 8 events in the detector) randomly distributed in a region of 11° × 7° (x-axis: pair angular distance in degrees).
galactic longitude 65° < l < 85°. Recently, the Mi-
lagro collaboration measured both a diffuse TeV γ-ray
emission and a bright, extended TeV source [5]. These
observations suggest the presence of cosmic rays sources
which accelerate hadrons that subsequently interact with
the local, dense interstellar medium to produce γ-rays
and possibly neutrinos through pion decay. Estimates of
the neutrino emission from the zone of diffuse γ-ray
emission are reported in [6].
The current point source search method is optimized for resolvable sources. However, to study extended regions like the Cygnus region, this method is not optimal. A better analysis for these cases takes advantage of the possible clustering of neutrino events over the totality of the region to improve the detection probability. In this multi-point source (MPS) analysis, we construct a two-point correlation function in which each neutrino candidate that points inside the region of study is paired with all other neutrino candidates. A test statistic is then obtained from the number of “close” pairs, for which the angular separation is at most 2 degrees, the bin size that achieves the best signal-to-noise ratio (for IceCube-22/AMANDA data). An excess in the number of these close pairs would indicate an emission from astrophysical sources in the chosen region. This method is sensitive not only to a clustered signal that would come from a single source, but would also take advantage of the presence of a diffuse signal.
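A minimal sketch of this test statistic is given below (illustrative only; the equatorial coordinates and the scrambling in right ascension are assumptions standing in for the actual analysis chain). It counts candidate pairs closer than 2° and estimates a p-value from scrambled sky maps.

```python
import numpy as np

def count_close_pairs(ra_deg, dec_deg, max_sep_deg=2.0):
    """Count event pairs with angular separation below max_sep_deg.
    O(N^2) memory, adequate for a sketch with a few thousand events."""
    ra, dec = np.radians(ra_deg), np.radians(dec_deg)
    x, y, z = np.cos(dec) * np.cos(ra), np.cos(dec) * np.sin(ra), np.sin(dec)
    cos_sep = np.clip(np.outer(x, x) + np.outer(y, y) + np.outer(z, z), -1.0, 1.0)
    sep = np.degrees(np.arccos(cos_sep))
    iu = np.triu_indices(len(ra), k=1)          # count each pair once
    return int(np.sum(sep[iu] < max_sep_deg))

def mps_p_value(ra_deg, dec_deg, n_scrambles=1000, seed=0):
    """p-value of the observed close-pair count against maps scrambled in right ascension."""
    rng = np.random.default_rng(seed)
    observed = count_close_pairs(ra_deg, dec_deg)
    scrambled = np.array([count_close_pairs(rng.uniform(0.0, 360.0, len(ra_deg)), dec_deg)
                          for _ in range(n_scrambles)])
    return observed, float(np.mean(scrambled >= observed))
```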
To illustrate the potential of this method, we give an example of its performance for the IceCube-22/AMANDA configuration. Using the point-spread function obtained from the data (median value: 1.5°), we inserted simulated neutrino events from three possible sources in the IceCube-22/AMANDA dataset. Each simulated source yielded eight events in the detector and was positioned randomly within a region of 11° × 7° centered
around Cygnus. Fig. 5 shows the histogram of event pairs for the signal case divided by the average histogram of random cases. The first bin thus corresponds to the excess of “close pairs”. In order to evaluate the significance associated with this excess, the number of close pairs in 10^7 scrambled sky maps is used. The excess obtained in this example has a p-value of 3×10^−7, corresponding to a 5σ detection. For the same configuration, the standard point source analysis [12] is less sensitive, as it would require 11 events from each of the sources to reach a detection at the 5σ level (instead of just 8). This analysis will be applied to the unblinded data for IceCube-22/AMANDA and IceCube-40/AMANDA in the near future. For IceCube-22/AMANDA, we will use a region surrounding the most active sources observed by Milagro in Cygnus to define our primaries (72° < l < 83°; -3° < b < 4°).
V. CONCLUSION AND OUTLOOK
Numerous galactic sources observed in γ-rays present a soft spectrum and possibly a cut-off at an energy E < 100 TeV. Under the hypothesis that the acceleration of hadrons explains the γ-ray emission, the associated neutrino spectrum should exhibit a similar cut-off. The merging of the AMANDA and IceCube detectors offers an enhancement in sensitivity for the search for these sources. The results of the IceCube-22/AMANDA configuration show no significant excess, either for a systematic galactic plane scan of the parts visible to IceCube or for a list of a priori selected sources. The data acquired with the IceCube-40/AMANDA configuration are under study, and an additional analysis allowing the investigation of the extended Cygnus region will be added. The AMANDA detector, which was shut down on May 15, 2009 as part of the startup of the physics run of the IceCube 59-string configuration, paved the way for the development of a nested, higher-granularity detector array within IceCube. A new detector array of this type, called “IceCube DeepCore”, is under construction [13]. It will consist of at least six strings instrumenting the deep ice (below 2100 m) deployed in the center of IceCube and will be completed during the 2009-2010 deployment season.
REFERENCES
[1] F. A. Aharonian et al., 2006a, ApJ, 636, 777.
[2] W. Wagner [AMANDA Coll.], ICRC 2003.
[3] A. Gross, C. Ha, C. Rott, M. Tluczykont, E. Resconi, T. DeYoung, G. Wikström [IceCube Coll.], ICRC 2007, arXiv:0711.0353.
[4] S. R. Kelner et al., 2006, PhRvD, 74, 034018.
[5] A. A. Abdo et al., ApJ, 688, 1078, arXiv:0805.0417.
[6] S. Gabici, A. M. Taylor, R. J. White, S. Casanova, F. A. Aharonian, Astropart. Phys. 29 (2008) 180, arXiv:0806.2459.
[7] J. Braun et al., Astropart. Phys. 29 (2008) 299.
[8] F. Aharonian et al. [H.E.S.S. Coll.], A&A 457 (2006) 899.
[9] F. Aharonian et al. [H.E.S.S. Coll.], A&A 370 (2001) 112.
[10] J. Albert et al., A&A 474 (2007) 937.
[11] J. Albert et al., 2006, Sci, 312, 1771.
[12] R. Abbasi et al. [IceCube Coll.], under pub., arXiv:0905.2253.
[13] C. Wiebusch [IceCube Coll.], these proceedings.
AMANDA 7-Year Multipole Analysis
Anne Schukraft∗, Jan-Patrick Hulß∗ for the IceCube Collaboration†
∗III. Physikalisches Institut, RWTH Aachen University, Germany
†see the special section of these proceedings
Abstract. The multipole analysis investigates the
arrival directions of registered neutrino events in
AMANDA-II by a spherical harmonics expansion.
The expansion of the expected atmospheric neutrino
distribution returns a characteristic set of expansion
coefficients. This characteristic spectrum of expan-
sion coefficients can be compared with the expansion
coefficients of the experimental data. As atmospheric
neutrinos are the dominant background of the search
for extraterrestrial neutrinos, the agreement of ex-
perimental data and the atmospheric prediction can
give evidence for physical neutrino sources or sys-
tematic uncertainties of the detector. Astrophysical
neutrino signals were simulated and it was shown
that they influence the expansion coefficients in a
characteristic way. Those simulations are used to
analyze deviations between experimental data and
Monte Carlo simulations with regard to potential
physical reasons. The analysis method was applied to the AMANDA-II neutrino sample measured between 2000 and 2006, and results are presented.
Keywords: Neutrino astrophysics, Anisotropy,
AMANDA-II
I. INTRODUCTION
The AMANDA-II neutrino detector, located at the South Pole, was constructed to search for astrophysical neutri-
nos. These neutrinos could originate from many different
Galactic and extragalactic candidate source types such
as Active Galactic Nuclei (AGN), supernova remnants
and microquasars. The detection of neutrinos is based on
the observation of Cherenkov light emitted by secondary
muons produced in charged current neutrino interactions.
This light is observed by photomultipliers deployed in
the Antarctic ice. Their signals are used to reconstruct
the direction and the energy of the primary neutrino.
AMANDA-II took data between 2000 and 2006. The
background of atmospheric muons is reduced by select-
ing only upward-going tracks in the detector, as only
neutrinos are able to enter the detector from below. This
restricts the field of view to the northern hemisphere.
The data is filtered and processed to reject
misreconstructed downward-going muon tracks [1]. The
final data sample contains 6144 neutrino-induced events between declinations of 0° and +90°, with a purity of > 95% away from the horizon.
II. ANALYSIS PRINCIPLE
The idea of this analysis is to search for deviations of
the measured AMANDA-II neutrino sky map from the
expected event distribution for atmospheric neutrinos,
which constitute the main part of the data sample [2].
A method to study such anisotropies is a multipole
analysis, which was also used to quantify the Cosmic
Microwave Background fluctuations. The analysis is
based on the decomposition of an event distribution

f(θ, φ) = Σ_{i=1}^{N_events} δ(cos θ_i − cos θ) δ(φ_i − φ)

into spherical harmonics Y_l^m(θ, φ), where θ and φ are the zenith and azimuth of the spherical analysis coordinate system. The expansion coefficients are

a_l^m = ∫_0^{2π} dφ ∫_{−1}^{1} d cos θ  f(θ, φ) Y_l^{m*}(θ, φ).      (1)
They provide information about the angular structure of the event distribution f(θ, φ). The index l corresponds to the scale of the angular structure δ ≈ 180°/l, while m gives the orientation on the sphere. The expansion coefficients with m = 0 depend only on the structure in the zenith direction of the analysis coordinate system. Averaging over the orientation-dependent a_l^m yields the multipole moments

C_l = 1/(2l + 1) Σ_{m=−l}^{+l} |a_l^m|².      (2)
They form an angular power spectrum characteristic of different input neutrino event distributions.
The starting point of this analysis is the angular power spectrum of atmospheric neutrino events only. Therefore, neutrino sky maps containing 6144 atmospheric neutrino events according to the Bartol atmospheric neutrino flux model [3] are simulated and numerically decomposed with the software package GLESP [4]. Statistical fluctuations are accounted for by averaging over 1000 random sky maps, resulting in a mean ⟨C_l⟩ and a statistical spread σ_{C_l} of each multipole moment.
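As an illustration of this decomposition (not part of the original analysis, which used GLESP [4]), the sketch below builds a counts map from event directions and extracts the angular power spectrum C_l with the healpy package.

```python
import numpy as np
import healpy as hp

def event_power_spectrum(theta, phi, nside=32, lmax=100):
    """Angular power spectrum C_l of an event sky map.
    theta, phi: per-event colatitude and longitude in radians."""
    counts = np.zeros(hp.nside2npix(nside))
    np.add.at(counts, hp.ang2pix(nside, theta, phi), 1.0)  # fill the counts map
    return hp.anafast(counts, lmax=lmax)                   # C_l for l = 0..lmax

# Toy example: 6144 isotropic events on one hemisphere, standing in for a simulated sky map.
rng = np.random.default_rng(1)
theta = np.arccos(rng.uniform(0.0, 1.0, 6144))
phi = rng.uniform(0.0, 2.0 * np.pi, 6144)
cl = event_power_spectrum(theta, phi)
```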
The same procedure is applied to simulated sky maps containing atmospheric and different amounts of signal neutrinos, with a total of likewise 6144 events. The influence of the signal neutrinos on the angular power spectrum is studied in terms of the pulls

d_l = (⟨C_l⟩ − ⟨C_{l,atms}⟩) / σ_{C_{l,atms}}.      (3)
Fig. 1. (a): Pull plot for the multipole moments C_l of the isotropic point source model. Sources are simulated with a mean source strength µ = 5 and an E_ν^−2 energy spectrum. The number of sources N_sources on the full sphere is varied (0, 200, 400, 600); the corresponding number of signal neutrinos on the northern hemisphere is given in brackets. The error bars are hidden by the marker symbols. (b): Pull plot for the expansion coefficients a_l^0 of the cosmic ray interaction model with the Galactic plane, in Galactic coordinates. The fraction of neutrinos in the sky map originating from the Galactic plane is varied (0 to 0.08); the corresponding number of signal neutrinos is given in brackets. The error bars are hidden by the marker symbols.
III. SIGNAL SIMULATION
The different models for candidate neutrino sources
investigated in this analysis are:
1) Isotropically distributed point sources
2) A diffuse flux from FR-II galaxies and blazars [5]
3) AGN registered in the Véron-Cetty and Véron (VCV) catalog [6]
4) Galactic point sources such as supernova remnants
or microquasars
5) Cosmic rays interacting in the Galactic plane.
All simulated pointlike neutrino sources are characterized by a Poissonian distributed source strength with mean µ and an energy spectrum E_ν^−γ. The relative angular detector acceptance depends on the neutrino energy and therefore on the spectral index of the simulated neutrino source. Signal neutrinos are simulated according to this acceptance considering systematic fluctuations. The total number of signal neutrinos in a sky map of the northern hemisphere with N_sources simulated sources on the full sky is therefore given by ∼ 0.5 · µ · N_sources. Additionally, the angular resolution is taken into account. It dominates over the uncertainty between the neutrino and muon direction.
The spectral index of pointlike sources is varied between 1.5 ≤ γ ≤ 2.3. As the spectral index of atmospheric neutrinos is close to 3.7, signal and background neutrinos are subject to different angular detector acceptances. Thus, in addition to the clustering of events around the source directions, the shape of the total angular event distribution is also used to identify a signature of signal neutrinos in the angular power spectrum [7]. Neutrinos from our Galaxy disturb the atmospheric event distribution by their bunching within the Galactic plane, modeled by a Gaussian band along the Galactic equator. Neutrinos produced in cosmic ray interactions with the interstellar medium of our Galaxy are assumed to follow the E^−2.7 primary energy spectrum.
A further topic (model 6) that can be studied with a multipole analysis is neutrino oscillations. The survival probability of atmospheric muon neutrinos depends on the neutrino energy and the traveling length of the neutrino, as well as on the mixing angle θ_23 and the squared mass difference Δm²_23. The traveling length can be expressed by the Earth's radius and the zenith angle of the neutrino direction [7]. Thus, neutrino oscillations disturb the angular event distribution of atmospheric neutrinos. With the assumption of sin²(2θ_23) ≈ 1, the squared mass difference remains as the parameter to investigate. Due to the relatively high energy threshold of 50 GeV the effect is small.
IV. EVALUATION OF THE POWER SPECTRA
The deviations from a pure atmospheric angular power spectrum caused by signal neutrinos are studied by means of the pulls. These pulls are shown as an example in Fig. 1a for the model of isotropic point sources. The behaviour of the pulls is characteristic for each signal model. Different multipole moments carry different sensitivity to the neutrino signal. The absolute value of the pull increases linearly with the amount of signal neutrinos in the sky maps. Each pull has a predefined sign.
The deviation of a particular sky map with multipole moments C_l from the pure atmospheric expectation ⟨C_{l,atms}⟩ is quantified by a significance indicator D² defined as

D² = (1/l_max) Σ_{l=1}^{l_max} sgn_l · w_l · ( (C_l − ⟨C_{l,atms}⟩) / σ_{C_{l,atms}} )²,      (4)
where l_max determines the considered multipole moments. The term in brackets is the pull between the
particular sky map and the mean of the atmospheric
expectation as defined in Eq. 3. The factors
w_l = (⟨C_l⟩ − ⟨C_{l,atms}⟩) / √(σ²_{C_l} + σ²_{C_{l,atms}})      (5)
are defined to weight the pulls according to their expected sensitivity to the signal. For each neutrino signal model one dedicated set of weights w_l is determined. Due to the linear increase of the pulls with the signal strength, the strength chosen to calculate the w_l is arbitrary. The weight factors w_l carry the expected sign of the pulls. sgn_l is the sign of the measured pull. Thus, the D² calculated for the particular sky map is increased if the observed deviation has the direction expected for the signal model and reduced otherwise.
Due to the weighting of the pulls, the sensitivity becomes stable for high l_max. A choice of l_max = 100 is sufficient to provide the best sensitivity to all investigated signal models.
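The significance indicator of Eq. (4) can be written compactly as below (a sketch, not the original analysis code); cl, cl_atms_mean, cl_atms_sigma and weights are assumed to be precomputed arrays over l = 1..l_max, with the weights w_l carrying the expected sign for the tested signal model.

```python
import numpy as np

def d_squared(cl, cl_atms_mean, cl_atms_sigma, weights):
    """Significance indicator D^2 of Eq. (4): weighted, signed, squared pulls of a
    sky map's C_l against the atmospheric expectation, averaged over l = 1..l_max."""
    pulls = (cl - cl_atms_mean) / cl_atms_sigma
    sgn = np.sign(pulls)              # sgn_l: sign of the measured pull
    return float(np.mean(sgn * weights * pulls**2))
```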
The D² of a sky map is interpreted physically by the use of confidence belts. Therefore, 1000 sky maps for every signal strength within a certain range are simulated and the D²-value for each sky map is calculated separately to obtain the D² distributions. The calculation of the average upper limit at 90% confidence level assuming zero signal is used to estimate the sensitivity of the analysis to different astrophysical models a priori. As the multipole analysis is applied to a wide range of astrophysical topics, the trial factor of the analysis becomes important. The trial factor rises with each new set of weights used to evaluate the experimental data. For this reason, models with almost similar weights are combined into a common set of weights, and only six sets remain.
If the signal signatures show up only in the zenith direction of the analysis coordinate system, the expansion coefficients a_l^0 are more sensitive than the multipole moments C_l. The reason is that the expansion coefficients with m = 0 are independent of the azimuth φ and contain the pure information about the zenith direction θ. A signal depending only on θ causes only statistical fluctuations, but no physical information, in the other expansion coefficients. Therefore, the signal has power only in the a_l^0. The analysis method stays exactly the same in these cases, except that all C_l are replaced by the a_l^0. This applies to the models of neutrinos from the Galactic plane and from sources of the VCV catalog, which show north-south symmetries of the neutrino signals in Galactic and supergalactic coordinates, respectively. Unlike the multipole moments C_l, the a_l^0 do not average over different orientations. Therefore, the analysis of the a_l^0 strongly depends on the coordinate system used. An example of pulls of the a_l^0 for the model of a diffuse neutrino flux from the Galactic plane is shown in Fig. 1b. The characteristic periodic behavior of the pulls is explained by the symmetry properties of the spherical harmonics.
Fig. 2. Pull plot for the experimental multipole moments C_l. Expected pulls for typical model parameters of isotropic point sources (600 mini sources with µ = 5; 100000 nano sources with µ = 0.02) are shown for comparison. The error bars symbolize the statistical fluctuation expected for an atmospheric neutrino sky map.
Fig. 3. Pull plot for the experimental expansion coefficients a_l^0 in Galactic coordinates. Expected pulls for typical parameters of cosmic ray interactions with the Galactic plane (ν fraction of 2%) are shown for comparison. The error bars symbolize the statistical fluctuation expected for an atmospheric neutrino sky map.
V. EXPERIMENTAL RESULTS
The experimental data are analyzed in two steps. First, the experimental data are tested for their compatibility with the pure atmospheric neutrino hypothesis. Second, the experimental pulls are compared with the expectations for the different investigated neutrino models.
The pulls of the experimental data are shown for the multipole moments C_l in Fig. 2 and for the expansion coefficients a_l^0 in Galactic coordinates in Fig. 3. To compare the measured data with the expected event distribution, a D² is calculated for the multipole moments C_l and the expansion coefficients a_l^0 for transformations into equatorial, Galactic and supergalactic coordinates separately. As no signal model is tested, sgn_l = w_l = 1 is assumed. A comparison with the corresponding D² distributions results in the p-values giving the probability to obtain a D² which is at least as extreme as the measured one, assuming that the pure atmospheric neutrino hypothesis is true (Table I).
The statistical consistency of the C_l and the a_l^0 in equatorial coordinates with the atmospheric expectation is marginal. Rotating to inclined coordinate systems, e.g. Galactic and supergalactic, the consistency improves. The deviation from the pure atmospheric expectation is not compatible with any of the signal models (see Figs. 2 and 3 for examples). The discrepancy may be attributed to uncertainties in the theoretical description of the atmospheric neutrino distribution, or to a contribution of unsimulated background of down-going muons misreconstructed as up-going, or to the modeling of properties of the AMANDA detector.
TABLE I
P-VALUES FOR THE COMPATIBILITY OF EXPERIMENTAL DATA AND PURE ATMOSPHERIC NEUTRINO HYPOTHESIS.

Observable             p-value
C_l                    0.02
a_l^0, Equatorial      0.02
a_l^0, Galactic        0.15
a_l^0, Supergalactic   0.70
The signal models are tested by calculating the D²-values of the experimental data using the corresponding sign and weight factors. As the observed deviations do not fit any of the investigated signal models, the physical model parameters are constrained. Due to the observed systematic effects affecting mainly the multipole moments C_l and the equatorial expansion coefficients a_l^0, no limits are derived on the models analyzed in the corresponding coordinate systems (models 1, 2 and 6). The other models are less affected. The limits given below do not include these systematic effects.
A limit on the source strength assuming the VCV source distribution (model 3) is calculated for those sources closer than 100 Mpc to the Earth. In this model all sources are expected to have the same strength and energy spectrum. For a typical spectral index of γ = 2 the average source flux is limited by the experimental data to a differential source flux of dΦ/dE · E² ≤ 1.6 · 10^−10 GeV cm^−2 s^−1 sr^−1 in the energy range between 1.6 TeV and 1.7 PeV.
For the random Galactic sources (model 4), the number of sources is constrained assuming the same source strength and energy spectrum for all sources as well. For a spectral index of γ = 2, the limit on the number of sources is set by AMANDA to N_sources ≤ 39 assuming a source strength of dΦ/dE · E² ≤ 10^−8 GeV cm^−2 s^−1 sr^−1, or N_sources ≲ 4300 for sources with dΦ/dE · E² ≤ 10^−10 GeV cm^−2 s^−1 sr^−1. For source fluxes in between, the limit can be approximated by assuming linearity between N_sources and log(dΦ/dE · E²).
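For orientation (not from the original text), the interpolation rule just described can be applied as below; the intermediate flux value in the example is hypothetical.

```python
import numpy as np

def n_sources_limit(flux_e2):
    """Approximate limit on N_sources for a per-source flux dPhi/dE * E^2
    (in GeV cm^-2 s^-1 sr^-1), assuming linearity between N_sources and
    log10(flux) through the two anchor points quoted in the text."""
    return float(np.interp(np.log10(flux_e2), [-10.0, -8.0], [4300.0, 39.0]))

print(n_sources_limit(1e-9))  # hypothetical intermediate flux -> roughly 2200 sources
```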
The differential flux limit obtained from the experimental data on the diffuse neutrino flux from cosmic ray interactions in the Galactic plane (model 5) is dΦ/dE · E^2.7 ≤ 3.2 · 10^−4 GeV^1.7 cm^−2 s^−1 sr^−1.
This flux limit is shown in Fig. 4 together with the
results of two other AMANDA analyses and two
Fig. 4. Limit of the 7-year multipole analysis on the diffuse neutrino flux from cosmic ray interactions in the Galactic plane (dΦ/dE · E^2.7 in GeV^1.7 cm^−2 s^−1 sr^−1 versus energy), shown over its valid energy range. The limit is compared with two other analyses [2], [8] (4-year multipole and 4-year AMANDA-II limits) and two theoretical predictions [9], [10] (Gaisser, Halzen & Stanev; Ingelman & Thunman).
theoretical flux predictions. The seven-year multipole analysis currently provides the best limit. However, it does not yet reach the theoretical predictions.
VI. CONCLUSION
It is shown that the multipole analysis is sensitive to a wide range of physical topics. Its area of application is in particular the field of many weak sources in transition to diffuse fluxes. With the statistics of seven years of AMANDA data and improvements of the analysis technique, the method is now limited by systematic uncertainties in the atmospheric neutrino zenith distribution of the order of a few percent. Transforming to coordinate systems less affected by the equatorial zenith angle, such as the Galactic and supergalactic systems, physical conclusions are still possible. A compatibility of the measurement with the background expectation of atmospheric neutrinos is observed. Current efforts to better understand the observed systematics would allow an application of the multipole analysis to future high-statistics IceCube data.
REFERENCES
[1] R. Abbasi et al., Phys. Rev. D 79, 062001 (2009).
[2] J.-P. Hulß, Ch. Wiebusch for the IceCube Collaboration, 30th International Cosmic Ray Conference (ICRC 2007), Merida, arXiv:0711.0353.
[3] G. Barr et al., Phys. Rev. D 70, 023006 (2004).
[4] Doroshkevich et al., International Journal of Modern Physics D, Vol. 14, No. 2 (2005), http://www.glesp.nbi.dk/.
[5] J. Becker, P. Biermann, W. Rhode, Astroparticle Physics, Vol. 23, No. 4 (2005).
[6] M.-P. Véron-Cetty, P. Véron, A&A, 455, 733 (2006).
[7] A. Schukraft, Multipole analysis of the AMANDA-II neutrino skymap, diploma thesis, RWTH Aachen (2009).
[8] J. Kelley for the IceCube Collaboration, 29th International Cosmic Ray Conference (ICRC 2005), Pune, arXiv:0711.0353.
[9] T. Gaisser, F. Halzen, T. Stanev, Phys. Rept., 258:173-236 (1995).
[10] G. Ingelman, M. Thunman, arXiv:hep-ph/9604286 (1996).
This contribution is supported by the German Academic Exchange
Service (DAAD).
Measurement of the atmospheric neutrino energy spectrum with IceCube
Dmitry Chirkin∗ for the IceCube collaboration†
∗University of Wisconsin, Madison, U.S.A.
†See the special section of these proceedings.
Abstract. The IceCube detector, as configured during its operation in 2007, consisted of 22 deployed cables (strings), each equipped with 60 optical sensors, and was the largest neutrino detector operating during that year, superseded only by its later configurations. A high-quality sample of more than 8500 atmospheric neutrinos was extracted from this single year of operation and used for the measurement of the atmospheric muon neutrino energy spectrum from 100 GeV to 500 TeV discussed here. Several statistical techniques were used in an attempt to search for deviations of the neutrino flux from that of conventional atmospheric neutrino models.
Keywords: atmospheric neutrinos, charm search, IceCube
I. INTRODUCTION
Most of the events recorded by the IceCube detector
constitute the background of atmospheric muons that
are produced in air showers. Once this background is
removed the majority of events that remain are atmo-
spheric neutrino events, i.e., (mostly) muons created
by atmospheric neutrinos. Although much smaller, this
also constitutes background for the majority of research
topics in IceCube (e.g., extra-terrestrial neutrino flux
searches), except one: the atmospheric neutrino study.
As part of this study we verify that the atmospheric
neutrinos observed by IceCube are consistent with pre-
vious measurements at lower energies, and agree with
the theoretical extrapolations at higher energies. Since much uncertainty remains in the description of higher-energy atmospheric neutrinos, this study could provide interesting constraints on the (not yet observed) charm contribution to atmospheric neutrino production. Since such a charm contribution may affect the flux of atmospheric neutrinos in a way similar to extra-terrestrial diffuse contributions, we attempt to look for both simultaneously in a single likelihood approach.
II. EVENT SELECTION
For this analysis the new machine learning method (SBM) described in [1] was employed. The quality parameters used with the event selection method of this paper include and build upon those discussed previously in [2]. Unfortunately the size limit of this proceeding precludes us from discussing all of the event selection quality parameters and techniques; instead we describe one new technique in detail below.
Fig. 1. View of the IceCube 22 string configuration, as used in the run of 2007. The size of the circle and color indicate the relative string weight, used to compute several quality parameters, such as the size of the veto region for contained events, or the total weight, which, much like the number of hit strings, gauges the size of an event and its importance for the analysis.
Events in IceCube are normally formed by the DAQ by combining all hits satisfying the simple majority trigger. The simple majority trigger is defined to combine all hits which belong to one or more hit sets of at least n different-channel hits within w ns of each other. Typically n = 8 or more hits are required to be within w = 5 µs of each other to satisfy this trigger.
The simple majority trigger combines hits into events separating them only in time. In IceCube a substantial fraction of events so formed turns out to consist of hits originating from two or more separate particles, or bundles of particles, typically unrelated to each other, traveling through well-separated (in space) parts of the detector. In order to split up such events and to keep the rate of coincident (now in both time and space) events low, hits in the events were recombined via the use of the topological trigger. The definition of this trigger is very similar to that of the simple majority trigger given above: the topological trigger combines all topologically connected hits which belong to one or more hit sets of at least n different-channel hits within w ns of each other. Two hits are called topologically connected if they satisfy all of the following (the values used in the present analysis are given in each item):
• both hits originate on the detector strings
• if both hits are on the same string, they should not be separated by more than 30 optical sensors
• the strings of both hits must be within 500 meters of each other
• δt − δr/c must be less than 1000 ns.

Fig. 2. Zenith angle distribution of remaining data events in 275.5 days of IceCube data (black) compared with the atmospheric neutrino prediction from simulation (red). Several double-coincident air shower muon events remain at this level in simulation (shown in green). Vertically up-going tracks are at 0, horizontal tracks are at 1.

At least 4 topologically connected hits within 4 µs are required to form a topologically triggered set, which is then passed through the simple majority trigger. Just like in the simple majority trigger, hits not directly connected to each other can belong to the same event if they form topologically connected sets satisfying the multiplicity condition with at least one common hit belonging to both sets.
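As an illustration, the sketch below implements the pairwise connectivity test described above. It is a minimal, hypothetical re-implementation for clarity (the hit container, string-geometry lookup, and helper names are assumptions, not the collaboration's code); the thresholds are the values quoted in the text.

```python
import math

# assumed thresholds from the text (30 DOMs, 500 m, 1000 ns)
MAX_DOM_SEPARATION = 30          # same-string condition, in optical sensors
MAX_STRING_DISTANCE = 500.0      # meters
MAX_TIME_RESIDUAL = 1000.0       # ns
C_VACUUM = 0.2998                # m/ns, vacuum speed of light

def topologically_connected(hit_a, hit_b, string_xy):
    """Pairwise test sketched from the topological-trigger definition.

    Each hit is a dict with keys 'string', 'dom' (1..60) and 't' (ns);
    string_xy maps a string number to its (x, y) surface position in meters.
    """
    # both hits must be on instrumented strings known to the geometry
    if hit_a['string'] not in string_xy or hit_b['string'] not in string_xy:
        return False
    # same-string hits must not be separated by more than 30 DOMs
    if hit_a['string'] == hit_b['string']:
        if abs(hit_a['dom'] - hit_b['dom']) > MAX_DOM_SEPARATION:
            return False
    # the two strings must be within 500 m of each other
    (xa, ya), (xb, yb) = string_xy[hit_a['string']], string_xy[hit_b['string']]
    dr = math.hypot(xa - xb, ya - yb)
    if dr > MAX_STRING_DISTANCE:
        return False
    # causality-like condition: dt - dr/c below 1000 ns
    # (dr is taken here as the horizontal string separation; a full 3-D hit
    #  distance could equally be used)
    dt = abs(hit_a['t'] - hit_b['t'])
    return dt - dr / C_VACUUM < MAX_TIME_RESIDUAL
```

Sets of at least 4 such pairwise-connected hits within 4 µs would then be merged (sets sharing a common hit belong to the same event) before the simple majority trigger is applied.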
The required distance between the strings (500 meters) was left intentionally high to allow easy scaling of
the present analysis to higher-string IceCube detector
configurations. Still, the rate of unrelated coincident
events is much reduced via the use of the topological
trigger. More importantly, the fraction of such events
after the topological trigger stays at the same low level
as the detector grows.
An alternative approach, recognizing coincident events by reconstructing them with a double-muon hypothesis, was tried in a separate effort. For the present work, however, the topological trigger is believed to offer several crucial advantages:
•
the separation of coincident events is performed at
the hit selection level
•
the method is faster as it does not require compli-
cated dual-muon fits
•
not only 2 but also 3 and more coincident events
can be separated
•
all of these are kept for the analysis (in the alter-
native approach coincident events are thrown out)
•
noise hits are cleaned very efficiently
•
the rate of unresolved coincident events and coin-
cident noise hits is kept at the same low level as
the detector grows.
The event selection resulted in 8548 events found in 275.5 days of IceCube data (see the 22-string configuration in Figure 1), or 31 events per day at a ≳90% purity level estimated from simulation (the contamination being the remaining atmospheric muon background). Compare this to the expectation from simulation of 29.0 atmospheric neutrino events per day (Figure 2).

Fig. 3. Reconstructed muon energy at the closest approach point to the center-of-gravity of hits in the event. The data distribution is shown at both steps 1 and 2 of the SBM event selection method [1]. After the ∼90% purity level is reached in simulation (step 1), it is necessary to remove more events from data that do not look like well-reconstructed muons; this is achieved by comparing data events to simulated muon neutrino events (step 2).
III. ATMOSPHERIC NEUTRINO SPECTRUM
UNFOLDING
Figure 3 compares the measured muon energy distribution for conventional atmospheric neutrino simulation and data at the ≳90% purity level. The difference between data at steps 1 and 2 of the SBM event selection is due to the presence of events that were unlike those simulated. Such events are removed at step 2 by comparing them to the events in the atmospheric neutrino simulation [1]. At this time the difference between the two data curves should be treated as a measure of (at least some of) the systematic errors introduced by our simulation.
The uncertainty in our measurement of the muon energy is ∼0.3 in log₁₀(E_µ) over a wide energy range (from 1 TeV to 100 PeV). A larger smearing, estimated from neutrino simulation (based on [3]), is introduced when matching the muon energy at the location of the detector to the parent neutrino energy.
We tried a variety of unfolding techniques to obtain
the distribution of the parent muon neutrinos, including
the SVD [4] with regularization term that was the
second derivative of the unfolded statistical weight;
and iterative Bayesian unfolding [5] with a 5-point
spline fit smoothing function (with and without the
smearing kernel smoothing). Since we are looking for
deviations of the energy spectrum from the power law,
the SVD with regularization term that is the second
derivative of the log(flux) was selected as our method of
choice. Additionally, we chose to include the statistical
uncertainties of the unfolding matrix according to [6]
(using the equivalent number of events concept as in
[7]). The chosen method yielded the most consistent
description of the spectrum deviations that were studied; also, errors estimated from the half-width of the likelihood function were reasonable when compared to the spread of unfolded results in a large pool of simulated data sets (see Figures 4 and 5).

Fig. 4. Unfolded distribution of muon neutrino energies: the original distribution modeled according to [11] (red); median and 90% band of the unfolded result of 10000 simulated sets, drawn from the same simulation (blue dots/lines and black boxes, respectively). A small bias introduced by the regularization term shows up as a slight mismatch between the original and unfolded median bin values. Also shown is the distribution modeled according to [3] (green).

Fig. 5. Unfolded muon neutrino spectrum, averaged over zenith angle, with the same color designations as in Figure 4. The green points of [3] form a band as they are shown un-averaged, for each zenith angle separately.
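To make the regularized-unfolding idea concrete, the following is a minimal sketch of a curvature-regularized least-squares unfolding (a Tikhonov-style variant of the SVD approach of [4]). The response matrix, binning, and regularization strength below are toy placeholders; applying the second-derivative penalty to the logarithm of the flux, as done in the analysis, would additionally require an iterative or nonlinear fit on top of this.

```python
import numpy as np

def second_difference_matrix(n):
    """Discrete second-derivative (curvature) operator used as the penalty."""
    L = np.zeros((n - 2, n))
    for i in range(n - 2):
        L[i, i:i + 3] = [1.0, -2.0, 1.0]
    return L

def unfold(response, observed, tau):
    """Minimize ||R x - y||^2 + tau ||L x||^2 for the true-bin contents x.

    response: (n_reco, n_true) smearing matrix, observed: n_reco counts,
    tau: regularization strength (placeholder value, to be tuned on simulation).
    """
    R = np.asarray(response, float)
    y = np.asarray(observed, float)
    L = second_difference_matrix(R.shape[1])
    lhs = R.T @ R + tau * (L.T @ L)
    rhs = R.T @ y
    return np.linalg.solve(lhs, rhs)

# toy usage: a diagonal-dominant 5x5 response and some observed counts
R_toy = 0.7 * np.eye(5) + 0.15 * np.eye(5, k=1) + 0.15 * np.eye(5, k=-1)
y_toy = np.array([120.0, 80.0, 40.0, 15.0, 5.0])
print(unfold(R_toy, y_toy, tau=1.0))
```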
It is possible to study the effect of small charm and E⁻² isotropic diffuse contributions (as the two commonly studied deviations from the conventional neutrino flux models). Injecting known amounts of such contributions into the simulated event sets, one computes the 90% confidence belt as in [8], [9], [10] (shown in Figure 6 for the statistical weight of events in one of the bins of the unfolded distribution). The following table summarizes the average upper limits for diffuse and RQPM (optimistic) charm models (using the conventional neutrino flux description as in [11]):
flux             bin 8    bin 9    bin 10   bin 11
energies, TeV    46.4     100      215      1-10 PeV
E⁻², ·10⁻⁸       5.48     3.00     3.00     4.06
RQPM (opt), ·    0.74     0.90     1.34     2.44
Fig. 6. 90% confidence belt for the E⁻² isotropic diffuse flux contribution, calculated with 10000 independent simulated sets for bin 10 (neutrino energies 215 TeV-1 PeV).
Fig. 7. Likelihood model-testing profile for a simulated spectrum with a spectral index deviation of +0.2 with respect to the reference model. The 90% confidence belt (shown as a red contour) is very narrow and widens when systematic errors are taken into account.
IV. LIKELIHOOD MODEL TESTING
The likelihood model-testing approach is well suited to testing the data for specific deviations from the conventional flux model. This approach is based on the likelihood ordering principle of [8] and is easy to employ when several deviations are tested for simultaneously [12]. It has recently been used in the analysis of the AMANDA data [13] and is also used in a similar study presented in [14].
As an example, Figure 7 demonstrates the ability to measure the deviation of the conventional flux in overall normalization and spectral index (with 8548 neutrino events, in the absence of systematic errors). Figure 8 demonstrates the ability to discern simultaneous charm and diffuse E⁻² contributions (assuming that the precise normalization and spectral index of the conventional flux are also unknown). We estimate the median upper limits set by this method on both the charm and diffuse E⁻² components in Figure 9. We used the χ² with 2 degrees of freedom approximation to construct the confidence belts; the true 90% levels are even tighter than this (by a factor of ∼1.3-1.6) due to the high similarity of the effects of both components on the eventual event distribution.

Fig. 8. A 90% confidence belt for a simulated mixed contribution of 2 · RQPM (opt) charm expectation + 6 · 10⁻⁸ E⁻² isotropic (diffuse) component. This profile includes systematic errors on the overall normalization and spectral index of the conventional neutrino flux (allowing them to vary freely).

Fig. 9. 90% confidence level upper limit contours shown (in green) for 11 independent simulated data sets (drawn from the same conventional flux parent simulation according to [11]); the "median" upper limit is shown in red.
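As an aside, the kind of two-parameter likelihood scan described above can be sketched in a few lines. The binned Poisson likelihood and the Δ(−2 ln L) = 4.61 threshold for a 90% C.L. region with 2 free parameters are standard; the reference spectrum, binning, and pseudo-data below are toy placeholders, not the values used in the analysis.

```python
import numpy as np

def expected_counts(norm, dgamma, e_centers, ref_counts):
    """Toy model: scale a reference spectrum by a normalization factor and
    tilt it by a spectral-index correction dgamma around a pivot energy."""
    pivot = np.median(e_centers)
    return norm * ref_counts * (e_centers / pivot) ** (-dgamma)

def neg2lnL(mu, n_obs):
    """Poisson -2 ln L up to a model-independent constant."""
    mu = np.clip(mu, 1e-9, None)
    return 2.0 * np.sum(mu - n_obs * np.log(mu))

# toy reference spectrum (placeholder numbers) and pseudo-data
e_centers = np.logspace(2.5, 5.5, 10)                       # GeV
ref_counts = 3000.0 * (e_centers / e_centers[0]) ** (-1.7)
n_obs = np.random.default_rng(1).poisson(ref_counts)

# grid scan over normalization and spectral-index correction
norms = np.linspace(0.7, 1.3, 61)
dgammas = np.linspace(-0.2, 0.2, 61)
scan = np.array([[neg2lnL(expected_counts(a, g, e_centers, ref_counts), n_obs)
                  for g in dgammas] for a in norms])
delta = scan - scan.min()
allowed = delta <= 4.61          # ~90% C.L. region for 2 free parameters
i_best, j_best = np.argwhere(delta == 0)[0]
print("best-fit normalization:", norms[i_best],
      "spectral index correction:", dgammas[j_best],
      "allowed fraction of grid:", allowed.mean())
```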
V. MODEL REJECTION FACTOR
This is a method that optimizes the placement of a cut on the energy observable to maximize the sensitivity to an interesting flux contribution, as discussed in [15]. The model rejection factor (the ratio of µ₉₀ to the number of expected signal events for a given flux), computed from the curves shown in Figure 10, achieves its optimal value with a cut of 224 TeV on the reconstructed muon energy. The corresponding best average upper limit (sensitivity, not including systematics) of 2.14 · 10⁻⁸ is achieved.
Fig. 10. Cumulative number of E⁻² diffuse signal events shown in red, number of atmospheric neutrino (Bartol) events shown in blue; the corresponding average upper limit ⟨µ₉₀⟩(b) is shown in green.
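For readers unfamiliar with the model rejection factor, the sketch below shows one way such an optimization can be set up: an average 90% upper limit ⟨µ₉₀⟩(b) is computed by weighting Feldman-Cousins upper limits over Poisson background outcomes, and the cut minimizing ⟨µ₉₀⟩/n_signal is chosen. This is a generic, simplified construction; the grid ranges and the toy cumulative-count arrays are placeholders, not the values of Figure 10 or the collaboration's implementation.

```python
import numpy as np
from scipy.stats import poisson

def fc_accepted_n(mu, b, cl=0.90, n_max=200):
    """Feldman-Cousins acceptance set in observed counts n for signal mu, background b."""
    n = np.arange(n_max)
    p = poisson.pmf(n, mu + b)
    mu_best = np.maximum(n - b, 0.0)              # best-fit non-negative signal
    rank = p / poisson.pmf(n, mu_best + b)        # likelihood-ratio ordering
    order = np.argsort(-rank)
    cum = np.cumsum(p[order])
    k = int(np.searchsorted(cum, cl)) + 1         # smallest set with coverage >= cl
    return set(order[:k].tolist())

def fc_upper_limit(n_obs, b, cl=0.90, mu_grid=np.arange(0.0, 60.0, 0.25)):
    """90% C.L. upper limit on mu for an observed count n_obs (coarse grid)."""
    ok = [mu for mu in mu_grid if n_obs in fc_accepted_n(mu, b, cl)]
    return max(ok) if ok else 0.0

def average_upper_limit(b, cl=0.90):
    """<mu_90>(b): FC upper limit averaged over background-only Poisson outcomes."""
    ns = np.arange(int(b + 6.0 * np.sqrt(b) + 10.0) + 1)
    return float(sum(poisson.pmf(k, b) * fc_upper_limit(k, b, cl) for k in ns))

# toy cumulative counts above each candidate energy cut (placeholders)
cuts_log10E = np.array([2.0, 2.2, 2.4, 2.6, 2.8, 3.0])
bkg_above_cut = np.array([18.0, 9.0, 4.0, 1.5, 0.5, 0.1])   # expected atm. nu events
sig_above_cut = np.array([1.8, 1.5, 1.2, 0.8, 0.4, 0.1])    # expected test-flux events

mrf = np.array([average_upper_limit(b) / s
                for b, s in zip(bkg_above_cut, sig_above_cut)])
best = int(np.argmin(mrf))
print("optimal cut at log10(E) =", cuts_log10E[best],
      "; sensitivity =", mrf[best], "x test flux normalization")
```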
VI. CONCLUSIONS
We present a selection of 8548 muon neutrino events (with ≲10% estimated contamination from mis-reconstructed air shower muon events) in 275.5 days of IceCube-22 data. An unfolding technique is selected and used to compute the average upper limit on diffuse and charm contributions. We found that the likelihood model testing and the model rejection factor methods both achieve (not surprisingly) somewhat better sensitivities. Since the study of systematic errors is (at the time of writing of this report) not yet completed, the average upper limits presented here do not contain systematic error effects, and the actual upper limits (or the unfolded spectrum) computed from the data are not yet shown.
REFERENCES
[1] D. Chirkin et al., A new method for identifying neutrino events in IceCube data, these proceedings.
[2] D. Chirkin et al., Effect of the improved data acquisition system of IceCube on its neutrino-detection capabilities, 30th ICRC, Merida, Mexico (arXiv:0711.0353).
[3] M. Honda et al., Physical Review D, V70, 043008 (2004).
[4] V. Blobel, An unfolding method for high energy physics experiments, Advanced statistical techniques in particle physics conference, Durham, 2002.
[5] G. D'Agostini, A multidimensional unfolding method based on Bayes' theorem, DESY 94-099 (1994).
[6] R. Barlow and Ch. Beeston, Fitting using finite Monte Carlo samples, Computer Physics Communications 77 (1993) 219.
[7] G. Zech, Comparing statistical data to Monte Carlo simulation - parameter fitting and unfolding, DESY 95-113 (1995).
[8] G. Feldman and R. Cousins, Physical Review D, V57, 3873 (1998).
[9] K. Münich, J. Lünemann, Measurement of the atmospheric lepton energy spectra with AMANDA-II, 30th ICRC, Merida, Mexico (arXiv:0711.0353).
[10] S. Gozzini, Search for Prompt Neutrinos with AMANDA-II, Ph.D. thesis, Johannes Gutenberg Universität Mainz, 2008.
[11] G. Barr et al., Physical Review D, V70, 023006 (2004).
[12] G. Hill et al., Likelihood deconvolution of diffuse prompt and extra-terrestrial neutrino fluxes in the AMANDA-II detector, 30th ICRC, Merida, Mexico (arXiv:0711.0353).
[13] R. Abbasi et al. (IceCube collaboration), Determination of the Atmospheric Neutrino Flux and Searches for New Physics with AMANDA-II, accepted by Phys. Rev. D, 2009 (arXiv:0902.0675).
[14] W. Huelsnitz and J. Kelley (IceCube collaboration), Search for quantum gravity with IceCube and high energy atmospheric neutrinos, these proceedings.
[15] G. Hill and K. Rawlins, Astroparticle Physics 19 (2003), 393.
Atmospheric Neutrino Oscillation Measurements with IceCube
Carsten Rott* (for the IceCube Collaboration†)
*Center for Cosmology and AstroParticle Physics, Ohio State University, Columbus, OH 43210, USA
†See special section of these proceedings
Abstract. IceCube’s lowest energy threshold for the
detection of track like events (muon neutrinos) is
realized in vertical events, due to IceCube’s geome-
try. For this specific class of events, IceCube may
be able to observe muon neutrinos with energies
below 100 GeV at a statistically significant rate.
For these vertically up-going atmospheric neutrinos,
which travel a baseline length of the diameter of
the Earth, oscillation effects are expected to become
significant. We discuss the prospects of observing
atmospheric neutrino oscillations and sensitivity to
oscillation parameters based on a muon neutrino
disappearance measurement performed on IceCube
data with vertically up-going track-like events. We
further discuss future prospects of this measurement
and the impact of an IceCube string trigger con-
figuration that has been active since 2008 and was
specifically designed for the detection of these events.
Keywords: Neutrino Oscillations IceCube
I. INTRODUCTION
The IceCube Neutrino Telescope is currently under
construction at the South Pole and is about three quarters
completed [1]. Upon completion in 2011, it will instru-
ment a volume of approximately one cubic kilometer
utilizing 86 strings, each of which will contain 60 Digital
Optical Modules (DOMs). In total, 80 of these strings
will be arranged in a hexagonal pattern with an inter-
string spacing of about 125 m, and 17 m vertical sepa-
ration between DOMs at a depth between 1450 m and
2450 m. Complementing this 80 string baseline design
will be a deep and dense sub-array named DeepCore [2].
For this sub-array, six additional strings will be deployed
in the center, in between the regular strings, resulting
in an interstring-spacing of 72 m. DeepCore will be
densely instrumented in the deep ice below 2100 m, with
a vertical sensor spacing of 7 m. This array is specifically
designed for the detection and reconstruction of sub-TeV
neutrinos. Further, the deep ice provides better optical
properties and the usage of high quantum efficiency
photomultiplier tubes will enable us to study neutrinos
in the energy range of a few tens of GeV. This makes
DeepCore an ideal detector for the study of atmospheric
neutrino oscillations [2].
In this paper we present an atmospheric neutrino
oscillation analysis in progress on data collected with the
IceCube 22-string detector during 2007 and 2008. This
is an update on a previous report [4], with a larger, more
complete background simulation and hence re-optimized
selection criteria. An alternative background estimation
using the data itself is also discussed.
The goal of this analysis is to measure muon neutrino
(νµ) disappearance as a function of energy for a constant baseline length of the diameter of the Earth by studying vertically up-going νµ. Disappearance effects are
expected to become sizable at neutrino energies below
100 GeV in these vertical events. This energy range is
normally hard to access with IceCube. However, due
to IceCube’s vertical geometry, low noise rate, and low
trigger threshold the observation of neutrino oscillations
through νµ disappearance seems feasible. Atmospheric
neutrino oscillations have, as of today, not been observed
with AMANDA or IceCube.
Based on preliminary selection criteria, we show that
IceCube has the potential to detect low-energy vertical
up-going νµ events and we estimate the sensitivity to
oscillation parameters.
II. ATMOSPHERIC NEUTRINO OSCILLATIONS
Collisions of primary cosmic rays with nuclei in the upper atmosphere produce a steady stream of muon neutrinos from decays of secondaries (π±, K±). These atmospheric neutrinos follow a steeply falling energy spectrum with index γ ≈ 3.7.
In IceCube these muon neutrinos can be identified
through the observation of Cherenkov light from muons
produced in charged-current interactions of the neutrinos
with the Antarctic ice or the bedrock below. The main
difficulty in identifying these events stems from a large
down-going high energy atmospheric muon flux, that
could produce detector signatures consistent with those
produced by up-going muons. These events are the
background to this analysis.
Fig. 1. Muon neutrino survival probability under the assumption of effective 2-flavor neutrino oscillations νµ ↔ ντ as a function of energy for vertically traversing neutrinos (L = 12715 km), for Δm² = 2.4 × 10⁻³ eV² and sin²(2Θ) = 1.0.
Vertically up-going atmospheric neutrinos travel a distance equal to the Earth's diameter, which corresponds to a baseline length L of 12,715 km. The survival probability for these muon neutrinos can be approximated using the two-flavor neutrino oscillation case and is shown in Figure 1 for maximal mixing and a Δm² consistent with the Super-Kamiokande [6] and MINOS [7] measurements. It illustrates the disappearance effect (large below energies of 100 GeV) that we intend to observe.
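For reference, the curve in Figure 1 follows from the standard two-flavor vacuum survival probability, sketched below; the oscillation parameters are the ones quoted in the figure, while the energy grid is arbitrary.

```python
import numpy as np

L_KM = 12715.0          # vertical baseline: Earth diameter
DM2_EV2 = 2.4e-3        # Delta m^2 in eV^2 (value used in Figure 1)
SIN2_2THETA = 1.0       # maximal mixing

def survival_probability(e_gev):
    """Two-flavor vacuum approximation: P(nu_mu -> nu_mu)."""
    return 1.0 - SIN2_2THETA * np.sin(1.27 * DM2_EV2 * L_KM / e_gev) ** 2

energies = np.linspace(20.0, 200.0, 10)   # GeV
for e, p in zip(energies, survival_probability(energies)):
    print(f"E = {e:6.1f} GeV   P(survival) = {p:.2f}")
```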
III. OSCILLATION ANALYSIS
To probe oscillation effects, our selection criteria need
to be optimized towards the selection of low-energy
vertical muon events. The selection should also retain
some events at higher energies (with no oscillation
effects), that could be used to verify the overall normal-
ization. Low energy vertical up-going muons in IceCube predominantly result in registered signals ("hits") on a single string. The muon propagates very close to one string, such that the Cherenkov light can be sampled well even from low-energy events.
observing hits on a second string is very small due to
the large interstring distance of 125 m, and is further
suppressed through a local trigger condition known as
HLC (Hard Local Coincidence). The HLC condition
requires that a DOM only registers a hit if a (nearest
or next-to-nearest) neighbor also registers a hit within
1 µs. IceCube was operational in this mode for the 22
and 40-string data.
Given the nature of the signal events, the oscillation
analysis can be performed very similarly on the different
IceCube string configurations. To verify our understand-
ing of the detector, we perform this analysis in steps.
First, we use a subset of the 22-string configuration to
develop and optimize the selection criteria, then cross
check them on the full 22-string dataset and perform
the analysis on the IceCube datasets acquired following
the 22-string configuration.
The IceCube 22-string configuration operated between
May 31, 2007 and April 5, 2008. In this initial study,
we analyze only a small subset of the data acquired over this period, with a total livetime of 12.85 days, using randomly distributed data segments of up to 8 hours in length collected during the period of 22-string operations. The dataset was triggered with the multiplicity-eight DOM trigger and then preselected by a specific analysis filter running at the South Pole, selecting short track-like single-string events. The filter requires, after removal of potential noise hits, that all hits occur on a single string
and that the time difference between the earliest and
latest hit be less than 1000 ns. To partially veto down-
going muon background it requires no hits in the top
3 DOMs. Further, the hit time difference between at least
two adjacent DOMs must be consistent with the speed
of light within 25% tolerance, and the first DOM hit in
time needs to be near the bottom or top within the series
of DOMs hit on the single string. All filter selection
criteria are designed to be directionally independent,
so that vertical up-going events are collected as well
as vertical down-going. The described analysis only
uses the up-going sample collected by this filter. The
down-going sample could be used in the future for
flux normalization purposes, if we succeed in extracting
a pure atmospheric neutrino sample against the large
down-going atmospheric muon flux [3].
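A condensed sketch of such a single-string filter is given below. The hit format and helper names are hypothetical, and the numerical values (top 3 DOMs vetoed, 1000 ns time span, 17 m DOM spacing, 25% speed-of-light tolerance) are the ones quoted in the text; the real filter operates on the full DAQ event structure.

```python
C_LIGHT = 0.2998                       # m/ns
DOM_SPACING = 17.0                     # m, vertical separation between DOMs
EXPECTED_DT = DOM_SPACING / C_LIGHT    # ~57 ns between adjacent DOMs

def passes_vertical_filter(hits):
    """hits: list of (string, dom, time_ns) after noise cleaning (dom 1 = top)."""
    strings = {s for s, _, _ in hits}
    if len(strings) != 1:                         # all hits on a single string
        return False
    doms = sorted(d for _, d, _ in hits)
    if min(doms) <= 3:                            # partial veto: no hits in top 3 DOMs
        return False
    times = [t for _, _, t in hits]
    if max(times) - min(times) >= 1000.0:         # short, track-like events only
        return False
    # at least one adjacent-DOM pair consistent with the speed of light (25%)
    by_dom = {d: t for _, d, t in hits}
    ok_pair = any(abs(abs(by_dom[d] - by_dom[d + 1]) - EXPECTED_DT) <= 0.25 * EXPECTED_DT
                  for d in by_dom if d + 1 in by_dom)
    if not ok_pair:
        return False
    # first hit in time near the top or bottom of the hit series
    # (simplified here to exactly the first or last hit DOM)
    first_dom = min(by_dom, key=by_dom.get)
    return first_dom in (doms[0], doms[-1])
```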
To isolate our signal sample of vertical up-going
ν
µ
events we apply a series of consecutive selection
criteria. We require that the majority of time differences
between adjacent DOMs are consistent with unscattered
Cherenkov radiation (direct light) off a vertically up-
going muon (L4). In addition, a maximum likelihood
fit is applied requiring the muon to be reconstructed as
up-going (L5). After these selection criteria, the dataset
is still dominated by down-going muon background
mimicking up-going events. This background is estimated using two CORSIKA [8] samples: one with an energy spectrum according to the Hörandel polygonato model [5] and a second over-sampling the high-energy range. Simulations agree well with data in shape, but the normalization is found to be slightly high. Based on background and signal simulations (atmospheric νµ were generated with ANIS [9]) we define a set of tight selection criteria (that do not correlate strongly) which show good signal and background separation. These selection criteria are as follows: event time length greater than 400 ns (L6), mean charge per optical sensor larger than 1.5 photo-electrons (pe), total charge collected during the first 500 ns larger than 12 pe (L7), and an inner string condition (the trigger string completely surrounded by neighboring strings) (L8). The tight selection criteria
were independently optimized at level 5 in order to have
high statistics and smoother distributions which would
not be available at higher selection levels. Thereafter,
we reject all events in the available background COR-
SIKA sample corresponding to an equivalent detector
livetime of at least two days, taking into account the
oversampling. Using a conservative approach with two
days of livetime equivalent we can set a 90%C.L. upper
limit on the possible background contamination in the
data sample of 14.8 events, in 12.85 days of livetime. In
this sample we further expect 2.13 ± 0.07 (1.68 ± 0.06)
signal events (with oscillation effects taken into account)
from atmospheric neutrinos. See Table I for event counts
as function of the selection criteria. Figure 2 shows the
track length distribution after final selection criteria. The
track length serves as an energy estimator working well
at the energy range of interest since a muon travels
roughly 5 m/GeV. As expected, short tracks show larger
disappearance effects. Figure 3 shows the fraction of
events selected by this analysis that are below a certain
muon energy for different track lengths.
The optimization and cross-check on the small sub-
set of available data have been performed in a blind
manner. One event was observed after final selection
which is consistent with the prediction. This initial
result indicates that we understand and model the low-energy atmospheric neutrino region reasonably well. The analysis on the full dataset is in progress, including a larger background MC sample and a more detailed study of systematic uncertainties. Figure 4 shows the effective area for vertical up-going neutrinos in the 22-string detector at filter level and final selection.

Fig. 2. Expected track length of the signal, with and without oscillations taken into account, compared to data after the final selection criteria (cut level 8, 12.85 days livetime; expected 2.13 events without oscillations, 1.68 with oscillations; 1 event observed).

Fig. 3. Fraction of events in a given muon neutrino energy range (below 50 GeV, 50-100 GeV, above 100 GeV) as a function of their track length, defined by the number of DOMs hit, at final selection.
Fig. 4. Average muon neutrino effective area for vertical up-going neutrinos (within 15 degrees of the vertical direction) as a function of neutrino energy, at filter level and final cut level.
IV. BACKGROUND ESTIMATION
The background has been estimated using CORSIKA
simulations. However, due to limited MC statistics there
remains a large uncertainty at final selection.
To cross-check the background estimation and to
provide a second independent way to obtain a back-
ground estimate, we use the data itself to determine the
remaining background.
Cut   Corsika          Sig. (with osc)      Effect   Data
L3    439 ± 2 · 10⁴    20.3 (17.3) ± 0.4    15%      331 · 10⁴
L4    54 ± 2 · 10³     20.0 (17.0) ± 0.3    15%      32 · 10³
L5    464 ± 175        11.8 (9.7) ± 0.2     18%      321
L6    351 ± 171        10.7 (8.8) ± 0.2     18%      207
L7    151 ± 41         9.6 (7.9) ± 0.2      18%      145
L8    0                2.1 (1.7) ± 0.08     21%      1

TABLE I. Summary of the number of events in data and as predicted by simulations as a function of the selection criteria ("cut") level: L3 - initial processing (trigger, filter), L4/L5 - reconstructed track is vertical up-going, L6/L7 - charge-based selection criteria, L8 - inner strings only. See text for a detailed description of the selection criteria. "Effect" refers to the size of the disappearance effect.
The nature of the signal events (low energy vertical
tracks on a single string) allows us to estimate the
background based on the completeness of the veto region
defined by the surrounding strings, using geometrical
phase-space arguments.
The total number of events observed is the sum of the
passing signal events and background faking a signal.
The two categories display very different behavior with
respect to tightening the selection criteria. Signal events predominantly produce real vertical tracks, so that the rate on strings, regardless of their position, is very similar (see Figure 5).
Fig. 5. Number of events for 12.85 days of data at different cut levels as a function of the number of adjacent strings. The signal prediction is shown for comparison. Note that the number of adjacent strings does not affect the signal, as those events are predominantly single-string events.
Up-going νµ of higher energies and non-vertical νµ have a small impact on the overall rates. As selection
criteria become more stringent, the rates on the strings
become more homogeneous as they are dominated by
“high quality” low-energy vertical muon neutrino events.
Background behaves very differently under tightening
selection criteria, as it becomes more difficult to produce
a fake up-going track when the parameter space is taken
away and the veto condition tends to have a larger
impact.
We determine the ratio between the average number of events observed on a string with n adjacent strings¹ and those with n + 1. At a low selection level, the rate
on all strings is completely dominated by background.
At high selection level, strings having less than four
adjacent strings are also background dominated. We
use these first three bins to scale the ratio distributions
from an earlier selection level to the final selection
level. Figure 6 shows the predicted number of events
at next-to-final selection level (L7) obtained with this
method. The background estimation method from the data itself still needs to be finalized, including a study of the systematic uncertainties. It provides a cross-check to the
predictions from simulation and may ultimately be used
as the preferred background estimation method in this
analysis.
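One simplified way to picture this data-driven estimate is sketched below: the background-dominated bins (strings with few adjacent strings) fix the scale between a looser selection level and the final level, and the scaled loose-level distribution then serves as the background prediction in the signal-like bins. This is a schematic reduction of the ratio-scaling procedure described above, and the input arrays are placeholders.

```python
import numpy as np

def background_prediction(loose_per_string, tight_control_counts):
    """loose_per_string: average events per string at a loose selection level,
    binned by the number of adjacent strings (last bin = fully surrounded strings).
    tight_control_counts: the same quantity at the final level, but only for the
    first few background-dominated bins (strings with few neighbors).
    Returns the scaled loose-level distribution as the background prediction."""
    loose = np.asarray(loose_per_string, float)
    control = np.asarray(tight_control_counts, float)
    scale = control.sum() / loose[:control.size].sum()
    return scale * loose

# placeholder numbers for illustration only
loose = [4.0e4, 2.5e4, 1.2e4, 5.0e3, 2.0e3, 8.0e2]
tight_control = [55.0, 32.0, 15.0]        # background-dominated bins at final level
pred = background_prediction(loose, tight_control)
print("predicted background for fully-surrounded strings:", pred[-1])
```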
Fig. 6. Average number of events per string at the next-to-final selection level (L7) as a function of the number of adjacent strings. Note that the right-most bin corresponds to the final selection.
V. DISCUSSION OF SENSITIVITY FOR 40-STRING
AND FULL ICECUBE
The IceCube 40-string dataset is in many ways su-
perior to the 22-string dataset. The trigger system has
been significantly improved over the 22-string detector
through the addition of a string trigger [10], roughly
doubling the vertical muon neutrino candidate events per
string. In order to efficiently reject the down-going muon background, we require that a string be entirely surrounded by adjacent strings (inner-strings criterion) as part of the final selection. The 40-string detector has about a factor of three more inner strings.
Based on the selection criteria for the IceCube 22-string analysis, we have evaluated the sensitivity of the 40-string detector with one year of data using a χ² test on the track length distribution. Selection criteria
are identical to those presented here, but the number of
expected signal events is scaled according to expectation
for the 40-string array. We expect about 400 signal
events, based on the detector livetime, number of inner
strings, and a factor two increase in number of events
¹We define adjacent strings as those that are within the nominal inter-string distance (roughly 125 m) of the hexagonal detector pattern.
due to the string trigger. Figure 7 shows the expected
sensitivity limits obtained in this way as function of
the oscillation parameters. Systematic uncertainties are still being investigated and are not included; they are dominated by the atmospheric neutrino flux uncertainty, optical module sensitivity, and ice effects.
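Schematically, such a sensitivity estimate can be obtained by weighting the expected no-oscillation counts in bins of an energy-like observable (here track length) with the survival probability for each assumed (sin²2θ, Δm²) point and computing a χ² against the no-oscillation expectation. The sketch below uses toy bin contents and mean energies, not the actual simulated distributions.

```python
import numpy as np

L_KM = 12715.0   # vertical baseline (Earth diameter)

def survival(e_gev, sin2_2theta, dm2_ev2):
    return 1.0 - sin2_2theta * np.sin(1.27 * dm2_ev2 * L_KM / e_gev) ** 2

# toy track-length bins: expected signal counts without oscillation and the
# mean neutrino energy per bin (placeholders for the simulated distributions)
n_no_osc = np.array([60.0, 90.0, 100.0, 80.0, 45.0, 20.0, 5.0])
e_mean   = np.array([25.0, 40.0, 60.0, 90.0, 130.0, 180.0, 250.0])  # GeV

def delta_chi2(sin2_2theta, dm2_ev2):
    """chi^2 of the oscillated expectation against the no-oscillation one."""
    n_osc = n_no_osc * survival(e_mean, sin2_2theta, dm2_ev2)
    return np.sum((n_osc - n_no_osc) ** 2 / n_no_osc)

# scan the parameter plane; 90% C.L. for 2 parameters at delta chi^2 = 4.61
s2t = np.linspace(0.0, 1.0, 101)
dm2 = np.linspace(1e-4, 1e-2, 100)
grid = np.array([[delta_chi2(s, m) for m in dm2] for s in s2t])
excluded = grid > 4.61
print("fraction of the scanned plane distinguishable from no oscillation:",
      excluded.mean())
```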
Fig. 7. Expected constraints on the oscillation parameters (sin²2Θ₂₃ vs. Δm²) using the IceCube detector in the 40-string configuration, under the assumption of zero background (Δχ² contours at 1σ, 90% C.L. and 95% C.L.; IceCube preliminary).
VI. CONCLUSIONS
Preliminary results obtained with a subset of the data
collected with the IceCube 22-string configuration active
during 2007 and 2008, suggest that IceCube may have
sensitivity in the energy range where atmospheric oscil-
lations become important. We estimate the sensitivity to
oscillation parameters in the IceCube 40-string dataset
and find that IceCube can potentially constrain them,
pending the determination of the systematic uncertainties
associated with the predicted distributions. Understand-
ing of this energy region is also important for dark matter
annihilation signals from the center of the Earth and
further provides the groundwork for DeepCore, which
will probe neutrinos at a similar and even lower energy
range [2].
REFERENCES
[1] A. Achterberg et al. [IceCube Collaboration], Astropart. Phys.
26, 155 (2006).
[2] D. Grant et al. [IceCube Collaboration], Fundamental Neutrino
Measurements with IceCube DeepCore, this proceedings.
[3] I. F. M. Albuquerque and G. F. Smoot, Phys. Rev. D 64, 053008
(2001).
[4] C. Rott [IceCube Collaboration], “Neutrino Oscillation Measure-
ments with IceCube,” arXiv:0810.3698.
[5] J. R. Hörandel, Astropart. Phys. 19 (2003) 193.
[6] Y. Ashie et al. [Super-Kamiokande Collaboration], Phys. Rev. D
71, 112005 (2005).
[7] D. G. Michael et al. [MINOS Collaboration], Phys. Rev. Lett.
97, 191801 (2006); P. Adamson et al. [MINOS Collaboration],
Phys. Rev. Lett. 101, 131802 (2008).
[8] D. Heck et al., Forschungszentrum Karlsruhe Report FZKA-
6019, 1998.
[9] A. Gazizov and M. P. Kowalski, Comput. Phys. Commun. 172,
203 (2005).
[10] A. Gross et al. [IceCube Collaboration], arXiv:0711.0353.
Direct Measurement of the Atmospheric Muon Energy Spectrum
with IceCube
Patrick Berghaus* for the IceCube Collaboration†
*University of Wisconsin, Madison, USA
†See special section of these proceedings
Abstract. Data from the IceCube detector in its 22-string configuration (IC22) were used to directly measure the atmospheric muon energy spectrum near the horizon. After passage through more than 10 km of ice, muon bundles from air showers are reduced to single muons, whose energy can be estimated from the total number of photons registered in the detector.
is sensitive to the cosmic ray composition around
the knee and is complementary to measurements by
air shower arrays. The method described extends
the physics potential of neutrino telescopes and can
easily be applied in similar detectors. Presented is
the result from the analysis of one month of IC22
data. The entire event sample will be unblinded once
systematic detector effects are fully understood.
Keywords: atmospheric muons, CR composition, neutrino detector
I. INTRODUCTION
While the primary goal of IceCube is the detection
of astrophysical neutrinos, it also provides unique op-
portunities for cosmic-ray physics [1]. One of the most
important is the direct measurement of the atmospheric
muon energy spectrum.
As shown in figure 1 the energy spectrum of muons
produced in cosmic-ray induced air showers has so far
been measured only up to an energy of about 70 TeV
[2]. The best agreement with theoretical models was
found by the LVD detector, with the highest data point
located at Eµ = 40 TeV [3]. All these measurements
have been performed using underground detectors. Their
sensitivity was limited by the relatively small effective
volume compared to neutrino telescopes.
With a planned instrumented volume of one cubic
kilometer, IceCube will be able to register a substantial
amount of events even at very high energies, where the
flux becomes very low. The limitation in measuring the muon spectrum is given by the detector's granularity, and the consequent inability to resolve individual muons. Most air showers containing high energy muons will consist of bundles with hundreds or even thousands of tracks. Since the energy loss per unit length can be described by the equation dE/dx = a + bE, low-energy muons will contribute disproportionately to the total calorimetric detector response, which depends strongly on the energy of the primary, disfavoring the measurement of individual muon energies.
Fig. 1: Muon surface energy spectrum measurements (L3+Cosmic 2004, CosmoALEPH 2007, LVD 1998, MSU 1994, Baksan 1992, Frejus 1990, Artyomovsk 1985, MACRO best fit) compared to theoretical models [2].
This problem can be resolved by taking advantage of
the fact that low energy muons are attenuated by energy
losses during passage through the ice. In this analysis,
the emphasis was therefore set on horizontal events,
where only the most energetic muons are still able to
penetrate the surrounding material. The primary cosmic
ray interaction in this region takes place at a higher
altitude, and therefore in thinner air. The reinteraction
probability for light mesons (pions and kaons) is smaller
and the flux of muons originating in their decays is
maximized.
The main possibilities for physics investigations using
the muon energy spectrum are:
• Forward production of light mesons at high energies. While muon neutrinos at TeV energies mostly come from the process K → νµ + X, for kinematical reasons muons originate predominantly in pion decays π → νµ + µ [4]. An estimate of the pion production cross section from accelerator experiments gives an uncertainty of δ(σ_Nπ) ≃ 15% + 12.2% · log₁₀(E_π/500 GeV) at x_lab > 0.1 above 500 GeV [5]. This value should also apply in good approximation to the conventional (non-prompt) muon flux.
•
Prompt flux from charm meson decay in air show-
ers [6]. Because of their short decay length, the
reinteraction probability for heavy quark hadrons
is negligible. The resulting muon energy spectrum
follows the primary energy spectrum with a power law index of γ ≈ 2.7 and is almost constant over all zenith angles. Since the non-prompt muon flux
from lighter mesons is higher near the horizon, this
means that the relative contribution from charm is
lowest, and very challenging to detect.
•
Variations of the muon energy spectrum due to
changes of the CR composition around the knee.
Since the ratio of median parent cosmic ray and
muon energy is ≤ 10 at energies above 1 TeV [7],
a steepening of the energy-per-nucleon spectrum of
cosmic rays at a few PeV will have a measurable ef-
fect on the atmospheric muon spectrum at energies
of hundreds of TeV. Comparison of the measured
muon spectrum to various phenomenological com-
position models was the main focus of this analysis.
An additional benefit in the case of neutrino detectors
is that a direct measurement of the muon flux will
have important implications for neutrino analyses. By
reducing the systematic uncertainties on atmospheric
lepton production beyond 100 TeV, the detection po-
tential for diffuse astrophysical fluxes will be enhanced.
Also, atmospheric muons serve as a “test beam” that
allows calibration of the detector response to high-
energy tracks.
II. COSMIC RAY COMPOSITION MODELS
Starting from the hypothesis that most cosmic rays
originate from Fermi acceleration in supernova shock
fronts within our galaxy, the change in the energy
spectrum can be explained by leaking of high energy
particles. Since the gyromagnetic radius

R = p/(eZB) ≃ (10 pc) · E_prim[PeV] / (Z · B[µG])

depends on the charge Z of the particle, for a given energy nuclei of heavier elements are less likely to escape the galactic magnetic field than lighter ones.
The general expression for the flux of primary nuclei of charge Z and energy E₀ is

dΦ_Z/dE₀ = Φ⁰_Z · [1 + (E₀/E_trans)^(ε_c)]^(−Δγ/ε_c),

where the transition energy E_trans corresponds to Ê_p · Z, Ê_p · A, or simply Ê_p for rigidity-dependent, mass-dependent, and constant composition models, respectively. The parameter ε_c determines the smoothness of the transition, and Δγ the change in the power law index.
Three alternative composition models have been proposed, all of which can be fit reasonably well to the total cosmic ray flux in the region of the knee [8]. These are:
• Rigidity-Dependent Δγ: This is the default composition used in the IceCube downgoing muon simulation. It is also the one favored by current models of cosmic ray production and propagation in the galactic magnetic field.
• Mass-Dependent Δγ: An alternative model that also leads to a composition change around the knee. The change in the power law index does not depend on the charge, but on the mass of the nucleus. The best fit proposed in the original paper leads to a smaller value for the transition energy and a steeper spectrum after the cutoff.
• Constant Composition: Here, the composition of the primary cosmic ray flux does not change. The knee is explained by a common steepening in the energy spectrum for all primaries occurring at the same energy.

Fig. 2: Atmospheric muon energy spectrum at surface level averaged over the whole sky as simulated with CORSIKA/SIBYLL, for the constant, mass-dependent Δγ, and rigidity-dependent Δγ composition models.
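To make the parametrization concrete, the sketch below evaluates the knee expression above for the three choices of E_trans. The power-law form N_Z · E^(−γ_Z) assumed below the knee, and all numerical values (normalizations, indices, Ê_p, ε_c, Δγ), are illustrative placeholders rather than the fitted values of [8].

```python
import numpy as np

def primary_flux(e0, z, a, n_z, gamma_z, e_hat_p, eps_c, dgamma, mode="rigidity"):
    """Flux of nuclei with charge z and mass number a at energy e0 (arbitrary units).

    An unbroken power law N_Z * E^-gamma_Z is multiplied by the knee factor
    [1 + (E0/E_trans)^eps_c]^(-dgamma/eps_c), with E_trans set by the model."""
    if mode == "rigidity":
        e_trans = e_hat_p * z      # knee scales with charge
    elif mode == "mass":
        e_trans = e_hat_p * a      # knee scales with mass number
    else:                          # constant composition
        e_trans = e_hat_p
    knee = (1.0 + (e0 / e_trans) ** eps_c) ** (-dgamma / eps_c)
    return n_z * e0 ** (-gamma_z) * knee

# illustrative comparison for iron (Z=26, A=56) at 10 PeV
e0 = 1.0e7   # GeV
for mode in ("rigidity", "mass", "constant"):
    phi = primary_flux(e0, z=26, a=56, n_z=1.0, gamma_z=2.7,
                       e_hat_p=4.0e6, eps_c=1.9, dgamma=2.1, mode=mode)
    print(mode, phi)
```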
The best measurement of the composition so far was
done by KASCADE [9]. Its result was consistent with
a steepening of the spectrum of light elements, but
depended strongly on the hadronic interaction model
used to simulate the air showers (SIBYLL or QGSJET).
The influence of the three composition models on the
muon energy spectrum is shown in figure 2. While the
spectrum for the constant composition model gradually changes from E^−3.7 to E^−4, the other two show a
marked steepening corresponding to the cutoff in the
energy per primary nucleon. By accurately measuring
the muon energy spectrum, it is therefore possible to
significantly constrain the range of allowed cosmic ray
composition models in the knee region.
III. ANALYSIS
The data set used in this analysis is based on the IceCube online muon filter, designed to contain all track-like events originating from the region below 70°.
It covers the period from June 2006 to March 2007
with an integrated livetime of 275.6 days, during which
IceCube was taking data with 22 strings (IC22). A
number of quality cuts were applied in order to eliminate
background from misreconstructed tracks and to reduce
the median error in the zenith angle measurement to
≈ 0.7°. The final sample corresponded to an event rate of 0.146 Hz.

Fig. 3: Relation between the energy proxy ẽ_surf and the true surface energy of the most energetic muon in the shower. Here and in figure 4 the rigidity-dependent composition model was used.
θ_zen [deg]   d_vert [km]   d_slant [km]   E_thr^µ [TeV]
0             1.5           1.5            0.28
70            1.5           4.39           1.12
70            2.5           7.31           2.59
85            1.5           17.21          22.1
85            2.5           28.68          207

TABLE I: Threshold energy for muons passing through ice. The energy values correspond to an attenuation of 99.9%.
To measure the single muon energy spectrum, it is
necessary to reduce the background of high-multiplicity
bundles, whose total energy depends primarily on the
primary cosmic ray [10]. Since there is no possibility
to accurately estimate the multiplicity of a downgoing
muon bundle, the only way to obtain single muons is by
selecting a region close to the horizon to which muons
of lower energies cannot penetrate.
The minimum energy required for muons passing through a distance d of ice can be approximated by the equation

E_cut(d) = (e^(bd) − 1) · a/b,

where a = 0.163 GeV m⁻¹ and b = 0.192 · 10⁻³ m⁻¹
[11]. The resulting threshold energies corresponding to
vertical tracks and for tracks at the top and bottom of
the detector for angles near the horizon are shown in
Table I.
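As a quick cross-check, the expression above reproduces the threshold energies of Table I for the listed depth and zenith combinations (up to rounding); the short sketch below evaluates it.

```python
import math

A = 0.163      # GeV/m, effective loss constant quoted in the text
B = 0.192e-3   # 1/m, effective loss constant quoted in the text

def e_cut_tev(slant_depth_m):
    """Minimum surface muon energy (TeV) to survive a given slant depth of ice."""
    return (math.exp(B * slant_depth_m) - 1.0) * A / B / 1000.0

for theta_deg, d_vert_km in [(0, 1.5), (70, 1.5), (70, 2.5), (85, 1.5), (85, 2.5)]:
    d_slant_m = d_vert_km * 1000.0 / math.cos(math.radians(theta_deg))
    print(f"zenith {theta_deg:2d} deg, depth {d_vert_km} km: "
          f"d_slant = {d_slant_m / 1000.0:6.2f} km, E_thr = {e_cut_tev(d_slant_m):7.2f} TeV")
```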
Two factors determine the upper energy bound of
this analysis. One is the contribution from atmospheric
neutrinos, which will eventually dominate the event
sample at large depths. The other, and more important, is
the finite zenith angle resolution. Using simulated data, it was determined that this effectively limits the measurement of the slant depth to values below 15 km.
Fig. 4: Simulated muon multiplicity for atmospheric showers at closest approach to the center of the InIce detector, for different cuts on the energy proxy (all events, ẽ_surf > 4.2, > 4.5, > 4.8).
Fig. 5: True surface energy of the most energetic muon in the shower, using the rigidity-dependent composition model, for different values of ẽ_surf (4.2-4.3, 4.5-4.6, 4.8-4.9), with fits to a Gaussian function. All individual distributions are normalized to unity.
Using the slant depth alone, the range of this analysis
is therefore insufficient to probe the region beyond 100
TeV. However, the reach can be extended by incorpo-
rating information about the energy of the muon as it
passes through the instrumented volume.
For muon tracks in the detector, the energy resolution approaches Δlog₁₀(E) ≈ 0.3 above 10 TeV [12]. This
information can be combined with the slant depth to
obtain a better estimate for the muon energy at the
surface.
A natural way to do this is by defining a surface energy proxy ẽ_surf that behaves as

exp(ẽ_surf) ∝ log n_γ · d_slant,

where n_γ represents the total number of photons measured by the detector. Figure 3 shows the resulting parameter, which has been linearly rescaled in such a way that its value corresponds to the mean log(E_µ,surf/GeV) for any given bin, provided that the muon energy spectrum is reasonably close to the standard E^−3.7.
Fig. 6: Data from one month of IC22 at final cut level compared to simulated event rates and a fit of the empirical function exp(a + bẽ + cẽ²) to the constant composition distribution.

Fig. 7: Ratio of experimental to simulated ẽ_surf distributions for one month of IC22 data. Uncertainties are statistical only and exclude systematic detector effects.

An important criterion for the applicability of the
energy proxy parameter is that over the entire range of
measurement the muon multiplicity remains low, and the
influence of high-multiplicity bundles small. Figure 4
confirms that this is indeed the case. It should be noted
here that the most energetic muon typically accounts
for the dominant contribution to the total energy in the
detector, such that other tracks in the bundle can be
neglected.
The spread in muon surface energies for a given value of ẽ_surf is shown in Figure 5. Around the peak the distributions can be approximated by a Gaussian whose width lies in the range Δlog₁₀E ≈ 0.3-0.4.
IV. RESULT
Figure 6 shows simulated event rates as a function of ẽ_surf compared to data at final cut level. Almost over the entire range all three models can be approximated by the same empirical fit function. Only in the highest bin can a distinction be made.
The experimental data agrees remarkably well with the simulation, as can be seen more clearly in Figure 7. Despite the steeply falling distribution, the ratio of data to simulation remains very close to one over almost the entire range. For ẽ > 5, corresponding to E_µ > 100 TeV, the measurement is based on only three data events.
Using the entire year of IC22 data, the predicted event
yield for 5.1 < ẽ_surf < 5.2 based on the constant composition model corresponds to about 10 events. It
is therefore unlikely, even neglecting systematic uncer-
tainties, that any of the three models under considera-
tion could definitely be excluded yet. This situation is
expected to change as soon as 40-string data can be
included in the analysis.
V. CONCLUSION
This result demonstrates the potential for an accurate
measurement of the muon energy spectrum with large
neutrino detectors. So far only one month of data has
been considered in the analysis, corresponding to about
10% of the entire event sample. Nevertheless, the mea-
surement already covers an energy range almost a factor
of three above that of the previous upper limit, with very
good agreement between data and simulation.
While it will be difficult to make a definitive statement
about the cosmic ray composition around the knee based
on IC22 data, it will be possible to confirm the validity
of cosmic ray air shower models up to previously
inaccessible energy ranges.
At the time of writing, the instrumented volume of the detector has increased by almost a factor of three. Further enlargements are scheduled for the next
few years. Future measurements of the muon energy
spectrum will benefit from a larger effective area, and a
substantial improvement in the angular resolution related
to the longer lever arm for horizontal muon tracks within
the detector.
Once residual systematic detector uncertainties are
resolved, a comprehensive analysis that accounts for
both the energy spectrum of individual muons and the
total shower energy in the detector will be feasible. The
potential for such a combined measurement is unique to
large volume detectors.
REFERENCES
[1] P. Berghaus [IceCube Collaboration], Proc. of ISVHECRI 2008, arXiv:0902.0021 [astro-ph.HE].
[2] A. A. Kochanov, T. S. Sinegovskaya and S. I. Sinegovsky, Astropart. Phys. 30 (2008) 219.
[3] M. Aglietta et al. [LVD Collaboration], Phys. Rev. D 60 (1999) 112001.
[4] T. K. Gaisser, Nucl. Phys. Proc. Suppl. 118 (2003) 109.
[5] G. D. Barr, T. K. Gaisser, S. Robbins and T. Stanev, Phys. Rev. D 74 (2006) 094009.
[6] G. Gelmini, P. Gondolo and G. Varieschi, Phys. Rev. D 67 (2003) 017301.
[7] T. K. Gaisser, "Cosmic Rays and Particle Physics," Cambridge University Press, 1990.
[8] J. R. Hörandel, Astropart. Phys. 19, 193 (2003).
[9] T. Antoni et al. [The KASCADE Collaboration], Astropart. Phys. 24, 1 (2005).
[10] D. Chirkin [AMANDA Collaboration], Proc. of ICRC 2003.
[11] D. Chirkin and W. Rhode, arXiv:hep-ph/0407075.
[12] J. Zornoza, D. Chirkin [IceCube Coll.], ICRC 2007, arXiv:0711.0353 (pp. 63-66).
Search for Diffuse High Energy Neutrinos with IceCube
Kotoyo Hoshina* for the IceCube Collaboration†
*Department of Physics, University of Wisconsin, Madison, WI 53706, USA
†See the special section of these proceedings
Abstract. We performed a search for diffuse high energy neutrinos using data obtained with the IceCube 22-string detector during the period 2007-2008. In this analysis we used an E⁻² spectrum as a typical flux resulting from cosmic ray shock acceleration. Using a likelihood track reconstruction, approximately 5700 track-like neutrino events are extracted from 275.7 days of data at an estimated 95% purity level. The expected sensitivities obtained are in the range of 2.2×10⁻⁸ to 2.6×10⁻⁸ E⁻² GeV cm⁻² s⁻¹ sr⁻¹ with four different energy estimators. The analysis method and results are presented along with discussions of the systematics.
Keywords: IceCube neutrino diffuse
I. INTRODUCTION AND DETECTION PRINCIPLE
The IceCube neutrino observatory is the world’s
largest neutrino telescope under construction at the ge-
ographic South Pole. During 2007, it collected data
with 1320 digital optical modules(DOM) attached to 22
strings (with 60 optical modules per string). They are
deployed in clear glacial ice at depths between 1450
to 2450 meters beneath the surface, where the photon
scattering and absorption are known by preceding in situ
measurements [1]. When a neutrino interacts inside or
close to the IceCube detector, DOMs capture Cherenkov
photons from secondary charged particles with 10 inch
photomultiplier tubes and generate digital waveforms. In
most cases, we require at least 8 DOMs to be triggered
within a 10 micro second time window. Once the trigger
condition is satisfied, all digital waveforms are collected
and then processed by online filtering programs to
filter out background events. In this analysis we used
275.7 days livetime of data and obtained 5718 candidate
neutrino induced events after the final event selection.
The event selection process is described in Section II.
The event sample after the selection process mainly
consists of atmospheric neutrinos. To separate extrater-
restrial high-energy neutrinos from atmospheric neutri-
nos, one can apply two types of analysis techniques.
The first is a point source analysis that uses the di-
rection of the neutrinos to survey high-density event
spots (hotspots). The second, called a diffuse analysis,
examines the energy spectrum itself and compares it to
various physics models. Since the diffuse analysis does
not require multiple events from an astrophysical source,
it is possible to take into account faint sources that are
not significant by themselves in a point source analysis.
However, in general, a diffuse analysis requires a better
detector simulation. While a point source analysis uses
data to search for a hotspot, the diffuse analysis has to rely on parameter distributions simulated under the assumption of a physics model to test the observed distributions in the data.
In this analysis we assumed a Φ ∝ E^-2 energy spec-
trum for neutrinos from astrophysical sources result-
ing from shock acceleration processes [2]. Since the
atmospheric neutrino flux has a much softer energy
spectrum [3][4][5], the signal neutrinos may form a
high-energy tail in an energy-related observable over
atmospheric neutrinos. The search for an extraterrestrial
neutrino component uses the number of events above
an energy estimator cut after subtracting a calculated
contribution from atmospheric neutrinos. The cut was
optimized to produce the best limit setting sensitivity [6].
Results and possible sources of systematic errors are
discussed in Section IV.
II. EVENT SELECTION
Cosmic ray interactions in the atmosphere create
pions, kaons and charmed hadrons which can later decay
into muons and neutrinos. The primary background
before the event selection is atmospheric muons travel-
ing downward through the ice. Their intensity strongly
depends on the zenith angle of the muon: it decreases
as the zenith angle increases because a higher zenith
angle results in a longer path length from the surface
of the Earth to the IceCube detector. The largest zenith angle at which atmospheric muons reach the detector is around 85 degrees, where their path length inside the Earth already exceeds 20 km. The
first filter is thus designed to select only upward going
events. For estimation of the zenith angle, we used a log
likelihood reconstruction. In this analysis, the minimum
zenith threshold is 90 degrees.
After the zenith angle filter is applied, the remaining
data still contains many orders of magnitude more mis-
reconstructed background than neutrino-induced events.
They are downward going muons that are reconstructed as upward going because of poor event quality (low number of triggered DOMs, grazing an edge of the detector, etc.), or two muons passing through the detector within one trigger time window (coincidence muons) that are mis-reconstructed as a single upward going muon.^1 These mis-reconstructed events are effectively rejected by checking fit quality parameters [7]:
^1 This difficulty is mainly caused by the scattering of photons in ice. The effective scattering length of Cherenkov photons in IceCube is around 30 m [1].
Fig. 1. (a-d): Comparison of simulations and data for basic parameters after the purification process: (a) NCh, (b) cosine zenith, (c) COGZ, (d) log-likelihood ratio. COGZ is the z-position of the center of gravity of the charge of an event in the IceCube coordinate system. (e) log10(dEdX): an alternative energy estimator, see Sec. III. (f): Effective area of νµ + ν̄µ after the event selection, in several zenith angle ranges.
• Number of direct hits (NDir) : number of hits which
are assumed to result mostly from unscattered
Cherenkov photons
• Projected length of direct hits (LDir) : largest distance between a pair of projections of the direct hit positions onto the reconstructed track
• Reduced log likelihood : log likelihood result of a
reconstructed track divided by number of degrees
of freedom
• log likelihood ratio : difference of log likelihood
parameters between a fit and a Bayesian fit which
is forced to reconstruct as downward going
• smoothness of hits : a parameter describing how smoothly the hits are distributed along a reconstructed track
• log likelihood ratio between single muon fit and Bayesian weighted double muon fit : a parameter similar to the log likelihood ratio, but using two Bayesian fits as a hypothesis for coincident muons
The “direct hits” are defined by the arrival times of
photons at each DOM and a reconstruction. Once a
reconstruction is determined, at each DOM, we obtain
a minimum path and earliest possible arrival times of
photons (geometrical hit times) from the Cherenkov light
emission point. Some photons may take a longer path
because of scattering, which results in a time delay from
the geometrical hit time. In this analysis, we chose a
time window of [-15ns, 75ns] from the geometrical hit
time to accept a hit as a direct hit.
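To make the direct-hit selection concrete, the sketch below computes the geometric (unscattered) arrival time for a through-going track hypothesis and keeps hits whose time residual lies in [-15 ns, 75 ns]; the track parametrization, the assumed refractive index and all names are illustrative, not IceCube software.

# Minimal sketch (not IceCube software): classify "direct hits" by their time
# residual with respect to the geometric Cherenkov arrival time of a track.
import numpy as np

C_VAC = 0.2998                     # m/ns, speed of light in vacuum
N_ICE = 1.32                       # assumed phase refractive index of ice
C_ICE = C_VAC / N_ICE
THETA_C = np.arccos(1.0 / N_ICE)   # Cherenkov angle (about 41 degrees)

def geometric_time(t0, vertex, direction, dom_pos):
    """Earliest possible (unscattered) photon arrival time at a DOM for a
    muon passing 'vertex' at time t0 with unit direction vector 'direction'."""
    d = np.asarray(dom_pos) - np.asarray(vertex)
    s = np.dot(d, direction)                   # distance along the track
    rho = np.linalg.norm(d - s * direction)    # perpendicular distance to the DOM
    # muon travels to the Cherenkov emission point, the photon covers the rest
    return t0 + (s - rho / np.tan(THETA_C)) / C_VAC + rho / (np.sin(THETA_C) * C_ICE)

def is_direct(hit_time, t0, vertex, direction, dom_pos, window=(-15.0, 75.0)):
    residual = hit_time - geometric_time(t0, vertex, direction, dom_pos)
    return window[0] <= residual <= window[1]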
The log likelihood ratio gives a comparison between
two fits, a standard likelihood fit and a fit with a zenith-
dependent weight which follows a zenith distribution of
atmospheric muons. A reliable good quality fit should
have a large ratio, while mis-reconstructed atmospheric
muons have relatively smaller ratios.
With these quality parameters, we defined a set of
cut parameters to purify neutrino-induced events using
Monte-Carlo simulation. For atmospheric muons, we
generated 10 days of single unweighted CORSIKA muons, 5 × 10^5 events of energy weighted CORSIKA muons,^2 and 7.4 days of unweighted CORSIKA coincidence muons. For atmospheric neutrinos, 2.6 × 10^7 ν_µ events were generated with an E^-1 spectrum and re-weighted with a conventional atmospheric neutrino flux [3] plus a prompt neutrino model [4][5]. The
optimal cut is chosen to retain as many high energy
neutrinos as possible while keeping the neutrino purity above 95%.
The optimized cut parameter is then applied to data
and compared with Monte-Carlo predictions. In order
not to bias the analysis, the highest energy tails of
both data and simulation were kept hidden from the
analyzer during this final optimization process of cuts.
The number of DOMs that have at least one hit (NCh) is used to define the open window: we compared only events with NCh less than 80. Small discrepancies
^2 The power-law index of the primary spectrum is made harder by +1. The effective livetime varies with the primary energy bin; for example, 10 TeV weighted muons correspond to one year of effective livetime. The effective livetime also depends on zenith, e.g. about a year for muons around 70 degrees.
between data and simulation around the threshold of
some quality parameters were observed, mainly because
of insufficient statistics of background coincident muon
simulation. These events are removed by moderately tightening the cut parameters.
Fig. 1 shows the comparison of data and simulation after the final event selection. The Monte Carlo simulation reproduces the data well in most event variables, but discrepancies are still present in some depth dependent variables like COGZ, the z (vertical, or depth) coordinate of the center of gravity of the charge in the event (z = 0 at the center of the detector). These systematic effects are discussed in Sec. IV. The neutrino effective area of ν_µ + ν̄_µ after the optimal quality cuts for 275.7 days of livetime of IceCube 22 strings is shown in Fig. 1f.
III. ENERGY ESTIMATORS AND SENSITIVITY
Unlike the previous detector AMANDA, the IceCube
detector retains the original waveform by digitizing ana-
log waveforms inside the DOM. This technology allows
us to use charge information as an energy estimator.
Recently, new techniques for energy reconstruction were
developed using the charge information as well as the
hit times. In this section, we compare the sensitivity of
the following energy estimators.
• NCh : number of triggered DOMs. It is simple, but
has a relatively strong connection with the track
geometry and the ice layers through which the muon passed.
• NPe : Total charge collected by all triggered DOMs
of an event. Basically it is similar to NCh, but has
a larger and smoother dynamic range than NCh.
• dEdx : A table based energy reconstruction. Using
a table generated by a photon propagation program
(Photonics [8]), it estimates the energy deposit
along a reconstructed track. The reconstruction
takes into account the ice properties as a function
of depth. [9]
• MuE : a simple energy reconstruction. Similar to dEdx, but uses a homogeneous ice model instead of the layered-ice Photonics tables. [10]
To obtain sensitivities, we assumed no extra-terrestrial signal above a given energy threshold and calculated the expected upper limit using the Feldman-Cousins method [11]. The Model Rejection Factor [6] is then optimized to give the best sensitivity for an E^-2 test signal flux. Table I shows the sensitivities at the corresponding energy estimator thresholds. The average numbers of background neutrinos and of Φ = 10^-7 E^-2 signal neutrinos above the thresholds are also given.
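For illustration, the threshold optimization just described can be sketched as follows, assuming hypothetical arrays of expected background and signal counts per candidate threshold and an externally supplied Feldman-Cousins upper limit; this is not the analysis code.

# Sketch of the Model Rejection Factor optimization: for each candidate energy
# threshold, compute MRF = <mu_90>(b)/n_signal and pick the minimum. The
# Feldman-Cousins upper limit mu_90(n_obs, b) is supplied externally here
# (fc_upper_limit is an assumed callable, not a real library function).
import math

def average_upper_limit(b, fc_upper_limit, n_max=None):
    """Poisson-average of the 90% CL upper limit over observed counts n,
    for background expectation b and no true signal."""
    n_max = n_max if n_max is not None else int(10 * (b + 1))
    return sum(math.exp(-b) * b**n / math.factorial(n) * fc_upper_limit(n, b)
               for n in range(n_max + 1))

def optimize_mrf(thresholds, n_bkg, n_sig, fc_upper_limit):
    """Return (best threshold, MRF); sensitivity = MRF x the 10^-7 E^-2 test flux."""
    mrfs = [average_upper_limit(b, fc_upper_limit) / s
            for b, s in zip(n_bkg, n_sig)]
    i = min(range(len(mrfs)), key=mrfs.__getitem__)
    return thresholds[i], mrfs[i]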
IV. RESULTS AND DISCUSSION
Table I also lists the number of data events above
the optimized energy thresholds for the four energy es-
timators. We observed a statistically significant excess of
data over the atmospheric neutrino prediction (including
prompt atmospheric neutrinos) for all energy estimators
except for dEdx. However, disagreements between data
and simulation in depth dependences (for example, in
COGZ in Fig. 1c) point to unresolved systematics in
our simulation. In this section we discuss the effect of
the COGZ problem on this analysis.
The depth dependences in the optical properties of the
glacial ice, reflecting changes in dust concentration due
to climate variations when the ice was formed [1], are
taken into account in the detector simulation. However,
as Fig. 1c shows, these dependences are not fully repro-
duced by the simulation. In this analysis, the discrepancy
is most severe in the deep part of the detector, for COGZ
< -250 m, which is also where most of the highest-
energy events lie. The event excess we observed thus
could be due to systematics rather than a signal flux.
To test the hypothesis that the excess is due to
inaccuracies in our simulation of depth dependences, we
repeated the analysis on data from the shallow (COGZ
> 0 m) part of the detector and from the deep (COGZ <
0 m) part separately. Fig. 2 shows the COGZ distribution
as a function of cosine zenith for the data, atmospheric
neutrino simulation, and a subtraction of the simulation
from data. To eliminate any bias from hard components
like prompt neutrinos or extra-terrestrial neutrinos, we
set an additional energy cut NCh < 50 for Fig. 2. Fig. 2c indicates that the systematic problems are
not specific to the highest energy events. Using events
with COGZ > 0 m and cosine zenith less than -0.2, the
data and simulation agree relatively well. We performed
the same procedures on the full dataset and no data
excess is observed in any of the energy estimators. This
result could be compared with the AMANDA diffuse
analysis [12] because the majority of hits are recorded
by DOMs at depths where AMANDA is deployed.
Considering the sensitivities listed in Table II, this result is consistent with the current upper limit for diffuse muon neutrinos, 7.4×10^-8 GeV cm^-2 s^-1 sr^-1. On the
other hand, at COGZ < 0 m with the same zenith cut, we
observed an event excess with three energy estimators.
Since the sensitivities of the lower COGZ sample are worse than those of the upper COGZ sample, the event excess
we observed with the full data set is highly likely due
to systematics. Table II summarizes all numbers obtained
from the two subsets.
Some of the systematics issues will be resolved with
ongoing calibration studies. Our description of the op-
tical ice properties has larger uncertainties in the deep
ice, where we so far have relied on extrapolations of
the AMANDA measurements in the shallower ice [1],
using measurements of dust concentration in Antarctic
ice cores for the extrapolation. The ice core data indicate
a strong improvement in ice clarity below AMANDA
depths, with an estimated increase in average scatter-
ing and absorption lengths of up to 40% at depths
greater than 2100 m. With such different ice properties
in the two parts of the detector, we are investigating
our possibly increased sensitivity to systematic error
sources that are present at AMANDA depths but become
more significant in the deeper, clearer ice. We are also
TABLE I
Sensitivities of IceCube 22 strings, 275.7 days, with various energy estimators. No systematic error included.

Estimator | MRF (sensitivity)       | Energy threshold    | Mean background | Mean signal | Data observed
NCh       | 0.22 (2.2 × 10^-8 E^-2) | NCh ≥ 99            | 9.3             | 29.4        | 22
NPe       | 0.26 (2.6 × 10^-8 E^-2) | log10(NPe) ≥ 3.15   | 6.6             | 22.5        | 10
dEdx      | 0.25 (2.5 × 10^-8 E^-2) | log10(dEdx) ≥ 1.4   | 4.1             | 19.8        | 4
MuE       | 0.24 (2.4 × 10^-8 E^-2) | log10(MuE) ≥ 5.05   | 6.4             | 28.4        | 13
TABLE II
Sensitivities of IceCube 22 strings, 275.7 days, with an additional COGZ cut and cosine zenith cut (cos θ < -0.2). No systematic error included.

Estimator | COGZ cut | MRF (sensitivity)       | Energy threshold    | Mean background | Mean signal | Data observed
NCh       | COGZ>0   | 0.41 (4.1 × 10^-8 E^-2) | NCh ≥ 68            | 7.9             | 15.0        | 3
NPe       | COGZ>0   | 0.54 (5.4 × 10^-8 E^-2) | log10(NPe) ≥ 2.85   | 8.0             | 11.3        | 5
dEdx      | COGZ>0   | 0.50 (5.0 × 10^-8 E^-2) | log10(dEdx) ≥ 0.97  | 7.9             | 12.2        | 5
MuE       | COGZ>0   | 0.50 (5.0 × 10^-8 E^-2) | log10(MuE) ≥ 4.65   | 9.9             | 13.2        | 7
NCh       | COGZ<0   | 0.47 (4.7 × 10^-8 E^-2) | NCh ≥ 80            | 12.8            | 15.7        | 25
NPe       | COGZ<0   | 0.64 (6.4 × 10^-8 E^-2) | log10(NPe) ≥ 3.15   | 2.4             | 6.4         | 4
dEdx      | COGZ<0   | 0.58 (5.8 × 10^-8 E^-2) | log10(dEdx) ≥ 0.91  | 15.5            | 14.0        | 14
MuE       | COGZ<0   | 0.62 (6.2 × 10^-8 E^-2) | log10(MuE) ≥ 5.00   | 2.9             | 7.1         | 6
Fig. 2. (a, b): Number of low-NCh events at final cut level in COGZ vs. cosine zenith, for (a) data and (b) atmospheric ν simulation. Events contributing to the plots are limited to NCh < 50. (c): Data minus atmospheric ν simulation; the boxes marked with an x represent negative values.
improving our photon propagation simulation to better
reproduce the data in the clearest ice. This improved sim-
ulation will be tested with data from in-situ light sources
(LED flashers, nitrogen lasers) and well-reconstructed
downward going muons.
Among the four energy estimators, dEdx shows the
most stable results. However, all the systematic prob-
lems must be understood before we proceed to claim
a physics result. The IceCube 22 string configuration is the first detector that allows a detailed comparison of Monte Carlo simulation and detector data in the deep ice with reasonable statistics. These results will be essential not only for this analysis, but also for upcoming analyses with the IceCube 40 string configuration.
V. CONCLUSION
Using 275.7 days of upward going muon events
collected by the IceCube 22 string configuration, we
performed a search for a diffuse flux of high energy ex-
traterrestrial muon neutrinos. The expected sensitivities are around 2.5×10^-8 GeV cm^-2 s^-1 sr^-1 for an E^-2 flux using four different energy estimators. We observed
an excess of data over that expected from background
above the best energy cut with some energy estimators.
In order to test the geometric stability of this analysis, we
performed the same analysis using two subsets of data
divided at a threshold of COGZ = 0 m. Since the results of the two subsets are inconsistent, the data excess we observed is most likely dominated by systematics. With
events at COGZ > 0 m, we observed no data excess with
any of the energy estimators, which is consistent with the
current upper limit on a diffuse flux of muon neutrinos
obtained by the AMANDA diffuse analysis [12]. Many
ongoing calibration studies will reveal the unknown
systematics in the near future.
REFERENCES
[1] M. Ackermann et al., J. Geophys. Res. 111, D13203 (2006).
[2] E. Waxman and J. Bahcall, Phys. Rev. D 59, 023002 (1998).
[3] G. D. Barr, T. K. Gaisser, S. Robbins, and T. Stanev, Phys. Rev. D 74, 094009 (2006).
[4] G. Fiorentini, A. Naumov, and F. L. Villante, Phys. Lett. B 510, 173 (2001).
[5] E. V. Bugaev et al., Il Nuovo Cimento 12C, No. 1, 41 (1989).
[6] G. C. Hill and K. Rawlins, Astropart. Phys. 19, 393 (2003).
[7] J. Ahrens et al., Nucl. Inst. Meth. A 524, 169 (2004).
[8] J. Lundberg, P. Miocinovic, K. Woschnagg et al., Nucl. Inst. Meth. A 581, 619 (2007).
[9] S. Grullon et al., "Reconstruction of high energy muon events in IceCube using waveforms," in Proc. of 30th ICRC (2007).
[10] J. D. Zornoza and D. Chirkin, "Muon energy reconstruction and atmospheric neutrino spectrum unfolding with the IceCube detector," in Proc. of 30th ICRC (2007).
[11] G. Feldman and R. Cousins, Phys. Rev. D 57, 3873 (1998); G. C. Hill, Phys. Rev. D, 118101 (2003).
[12] A. Achterberg et al., Phys. Rev. D 76, 042008 (2007).
PROCEEDINGS OF THE 31st ICRC, ŁÓDŹ 2009
A Search For Atmospheric Neutrino-Induced Cascades with
IceCube
Michelangelo D'Agostino* for the IceCube Collaboration†
*Department of Physics, University of California, Berkeley, CA 94720, USA
†See the special section of these proceedings.
Abstract. The IceCube detector is an all-flavor neutrino telescope. For several years IceCube has been detecting muon tracks from charged-current muon neutrino interactions in ice. However, IceCube has yet to observe the electromagnetic or hadronic particle showers or "cascades" initiated by charged- or neutral-current neutrino interactions. The first detection of such an event signature will likely come from the known flux of atmospheric electron and muon neutrinos. A search for atmospheric neutrino-induced cascades was performed using a full year of IceCube data. Reconstruction and background rejection techniques were developed to reach, for the first time, an expected signal-to-background ratio of ∼1 or better.
Keywords: atmospheric, neutrino, IceCube
IceCube is a cubic kilometer neutrino telescope cur-
rently under construction at the geographical South Pole.
With 59 of 86 strings of photomultiplier tubes now embedded in Antarctica's deep glacial ice, IceCube is
already the world’s largest neutrino detector [1].
IceCube detects high energy neutrinos by observing Cherenkov light from the secondary particles produced in neutrino interactions in ice. In charged-current ν_µ interactions, the outgoing energetic muon emits light along its track through the detector. A hadronic particle shower or cascade is also produced at the neutrino interaction vertex, but this is usually well outside the instrumented detector volume. In charged-current ν_e interactions, the outgoing electron initiates an electromagnetic (EM) cascade which accompanies the hadronic cascade. Neutral-current interactions of any neutrino flavor produce hadronic cascades.
At the energies relevant for atmospheric neutrinos,
both hadronic and EM cascades develop over lengths of
only a few meters. In a sparsely instrumented detector
like IceCube, they look like point sources of Cherenkov
light whose spherical wavefronts expand out into the
detector. While muon tracks have been detected by
neutrino telescopes, cascade detection has remained an
elusive goal for high energy neutrino astrophysics.
The well-studied atmospheric neutrino flux can serve
as a calibration source for the cascade detection channel
and should provide a valuable proof-of-principle for all-
flavor detection. Once neutrino-induced cascades have
been detected from the atmosphere, they should also
open up a powerful channel for astrophysics analysis.
Fig. 1. Final cut variable cumulative distribution (IceCube preliminary): number of surviving events as a function of final cut strength for atmospheric ν_e + ν_µ cascade signal Monte Carlo and a 10% sample of the full one-year dataset.
Since cascades are topologically distinct from muons,
they can be separated from the cosmic ray background
over the entire 4π of the sky [2].
The challenge of separating a cascade signal from
the overwhelming background of downgoing air-shower
muons is significant. In its 22 string configuration, ∼10 billion events triggered the IceCube detector in one year of operation. Of these, only ∼10,000 are expected to be atmospheric neutrino-induced cascades. Because the atmospheric ν_µ and ν_e fluxes differ [3], these ∼10,000 events are unequally distributed among the different cascade signal classes. For each ν_e, we expect ∼1.3 ν_µ neutral-current events and ∼2.9 ν_µ charged-current events where the hadronic cascade from the interaction vertex is inside the detector (so-called "starting events").
To begin the analysis, a fast filter was developed
to run online at the South Pole to select promising
candidate events for satellite transmission to the northern
hemisphere. The filter selected events with a spheri-
cal topology that were not good fits to relativistically
moving tracks. After this online filter, each event was
reconstructed according to track and cascade hypotheses
using hit timing information, and well-reconstructed
down-going tracks were thrown out.
A new, analytic energy reconstruction method for
cascades was developed that takes into account the
significant depth variation of the optical properties of
the glacial ice at the South Pole [4]. Several more
topological variables with good separation power were
also calculated for each event.
The main background for neutrino-induced cascade searches comes from the stochastic energy losses suffered by cosmic ray muons as they pass through the ice surrounding the optical sensors.
Fig. 2. Monte Carlo distributions of true neutrino energy at the earth's surface for atmospheric ν_e + ν̄_e, ν_µ + ν̄_µ NC and ν_µ + ν̄_µ CC events surviving a final cut variable value of 0.73.
Two basic variables are
employed to reduce this background. First, we measure
how far inside the geometric volume of the detector
the reconstructed cascade vertex lies. Muons with a
large stochastic energy loss far inside the detector are
more likely to leave early hits in outer sensors and
can thus be rejected. Second, background separation
becomes easier as the cascade energy increases. This is
because the more energetic stochastic losses that mimic
neutrino-induced cascades would have to come from
more energetic muons, which are more likely to leave
additional light that will allow for their identification.
We therefore expect that more energetic cascades deep
inside the detector will be the easiest signal to separate
from background.
Along these lines, several neural networks were
trained on 12 topological and reconstruction-based vari-
ables, including reconstructed energy and a measure of
containment within the detector. The product of these
variables is taken as the final discriminating cut variable.
Figure 1 shows the number of remaining events as a
function of the cut on this final variable for events that
reconstruct above 5 TeV for signal Monte Carlo and a
10% sub-sample of the available data.
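One way to read the final cut variable described above is as a product of the individual network outputs; under that assumption (and with an sklearn-style predict_proba interface standing in for the trained networks), a minimal sketch looks like this. It is an illustration only, not the analysis code.

# Hedged sketch: combine the outputs of several trained classifiers into one
# final cut variable by taking their product, then count surviving events.
import numpy as np

def final_cut_variable(classifiers, features):
    """Product of per-event scores from a list of trained classifiers,
    each assumed to expose predict_proba(features) -> (n_events, 2)."""
    scores = np.ones(len(features))
    for clf in classifiers:
        scores *= clf.predict_proba(features)[:, 1]   # signal-like probability
    return scores

def survivors_vs_cut(scores, cut_values):
    """Number of events with score above each candidate cut (cf. Fig. 1)."""
    return np.array([(scores >= c).sum() for c in cut_values])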
While nothing can yet be concluded from the 10%
data sample alone, the full dataset, which will be pre-
sented in this talk, may show signs of converging to
the signal expectation. Figure 2 shows the true neutrino
energy at the earth’s surface for the three classes of
simulated signal.
REFERENCES
[1] A. Karle for the IceCube Collaboration, "IceCube: Construction Status and First Results," arXiv:0812.3981, Dec. 2008.
[2] A. Achterberg et al., "Search for neutrino-induced cascades from gamma-ray bursts with AMANDA," The Astrophysical Journal, vol. 664, no. 1, pp. 397-410, 2007.
[3] G. D. Barr, T. K. Gaisser, P. Lipari, S. Robbins, and T. Stanev, "Three-dimensional calculation of atmospheric neutrinos," Physical Review D, vol. 70, no. 2, p. 023006, Jul. 2004.
[4] E. Middell and M. D'Agostino, "Improved reconstruction of cascade-like events in IceCube," these proceedings, 2009.
PROCEEDINGS OF THE 31st ICRC, ŁÓDŹ 2009
First search for extraterrestrial neutrino-induced cascades with
IceCube
Joanna Kiryluk* for the IceCube Collaboration†
*Lawrence Berkeley National Laboratory and University of California Berkeley, Berkeley, CA 94720, USA
†See the special section of these proceedings.
Abstract. We report on the first search for extra-terrestrial neutrino-induced cascades in IceCube. The analyzed data were collected in the year 2007, when 22 detector strings were installed and operated. We discuss the analysis methods used to reconstruct cascades and to suppress backgrounds. Simulated neutrino signal events with an E^-2 energy spectrum which pass the background rejection criteria are reconstructed with a resolution ∆(log E) ∼ 0.27 in the energy range from ∼20 TeV to a few PeV. We present the range of the diffuse flux of extra-terrestrial neutrinos in the cascade channel in IceCube within which we expect to be able to set a limit.
Keywords: extraterrestrial, neutrino, IceCube
I. INTRODUCTION
IceCube is a 1 km^3 Cherenkov detector under construction at the South Pole. Its primary goals are to detect high energy extra-terrestrial neutrinos of all flavors in a wide energy range, from ∼100 GeV to ∼100 EeV, to search for their sources, for example active galactic nuclei and gamma ray bursts, and to measure their diffuse flux. When complete, the IceCube detector will be composed of 4800 Digital Optical Modules (DOMs) on 80 strings spaced 125 m apart. In addition there will be 6 more densely populated Deep Core strings inside the IceCube detector volume. The array covers an area of one km^2 at depths from 1.45 to 2.45 km below the surface [1].
High energy neutrinos are detected by observing the
Cherenkov radiation from secondary particles produced
in neutrino interactions inside or near the detector.
Muon neutrinos in charged current (CC) interactions are
identified by the final state muon track [2]. Electron and tau neutrinos in CC interactions, as well as all-flavor neutrinos initiating neutral current (NC) interactions, are identified by observing electromagnetic or hadronic
showers (cascades). A 10 TeV cascade triggers IceCube
optical modules out to a radius of about 130 m. Cascades
can be reconstructed with good energy resolution, but
limited pointing resolution. The good energy resolution
and low background from atmospheric neutrinos make
cascades attractive for diffuse extraterrestrial neutrino
searches [3].
We present expected sensitivities for the diffuse flux of extra-terrestrial neutrinos in the cascade channel in IceCube. This work uses data collected in 2007 with the 22 strings that were deployed in IceCube at that time. The total livetime amounts to 270 days. Ten per cent of the data were used as a "burn" sample to
develop background rejection criteria. The results, after
unblinding, will be based on the remaining 90% of the
data, about 240 days.
II. DATA AND ANALYSIS
Backgrounds from atmospheric muons, produced in interactions of cosmic rays with nuclei in the Earth's atmosphere, form a considerable complication in all neutrino searches in IceCube. A filtering chain developed using Monte Carlo simulations of the muon background and the neutrino signal was used to reject these backgrounds online and offline.
The atmospheric muon background was simulated with CORSIKA [4]. In addition to the single muon events, which form the dominant background, an appropriate number of overlaid events was passed through the IceCube trigger and detector simulator to obtain a sample of coincident muons. The coincident muon events make a few per cent contribution to the total trigger rate. The signal, electron neutrino events, was simulated using an adapted version of the Monte Carlo generator ANIS [5] for energies from 40 GeV to 1 EeV and with an E^-2 energy spectrum.
All estimates for the number of signal events later in the text assume an E^-2 spectrum and a flux strength of:

Φ_model = 1.0 × 10^-6 (E/GeV)^-2 / (GeV s sr cm^2).   (1)
A. Online filtering
The main physics trigger is a "simple multiplicity trigger" (SMT), requiring photon signals in at least 8 DOMs, with the additional requirement of accompanying hits in any of the ±2 neighboring DOMs, each above a threshold of 1/6 of a single photoelectron and within a 5 μs coincidence window. Averaged over seasonal changes, the trigger rate for IC22 was 550 Hz. The mean SMT rate is generally well reproduced by Monte Carlo simulation, which gives 565 Hz. Assuming the flux given in Eq. 1, approximately 2.7 × 10^3 electron neutrino events and ∼1 × 10^10 background events are expected to trigger the detector in 240 days.
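As a rough illustration of this trigger logic (not the DAQ implementation), the sketch below counts hits that are in local coincidence with a neighboring DOM and applies the 8-fold multiplicity within a sliding 5 μs window; the hit container and the assumed 1 μs local-coincidence time window are illustrative.

# Hedged sketch of an SMT-8 style trigger.
from collections import namedtuple

Hit = namedtuple("Hit", "string dom time charge")  # time in ns, charge in PE

def in_local_coincidence(hit, hits, dom_sep=2, lc_window_ns=1000.0, q_min=1.0/6.0):
    """A hit counts if a neighboring DOM (within +-2 positions on the same
    string) also sees a pulse above threshold close in time."""
    return any(other is not hit
               and other.string == hit.string
               and abs(other.dom - hit.dom) <= dom_sep
               and abs(other.time - hit.time) <= lc_window_ns
               and other.charge >= q_min
               for other in hits)

def smt8(hits, multiplicity=8, trigger_window_ns=5000.0, q_min=1.0/6.0):
    """True if at least 8 local-coincidence hits fall within any 5 us window."""
    times = sorted(h.time for h in hits
                   if h.charge >= q_min and in_local_coincidence(h, hits))
    return any(times[i + multiplicity - 1] - times[i] <= trigger_window_ns
               for i in range(len(times) - multiplicity + 1))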
The backgrounds are suppressed online with first-guess reconstruction algorithms [6]. A first-guess track fit assumes that all hits can be projected onto a line, and that a particle producing those hits travels with velocity v_line. In addition, a simple cut on the sphericity of the events (EvalRatio_ToI) is used to select events with a hit topology consistent with cascades. The cut values used in the online filter are given in Table I. In the case of cascades, the online filter reduced the SMT trigger rate to ∼20 Hz, or 3.5% of the total trigger rate. Monte Carlo studies show that the filter retains about 70% of the simulated signal and rejects 97.5% of the simulated background events that trigger the detector; the simulation thus underestimates the overall rate observed in the data after filtering. Otherwise the main characteristics are well reproduced, as shown in Fig. 1, which displays the reconstructed center-of-gravity (COG) x position. The COG is calculated for each event as the signal-amplitude-weighted mean of all hit DOM positions.
Fig. 1. The reconstructed center-of-gravity (COG) x after online filtering. Data are shown as continuous lines, background Monte Carlo as dashed lines. The Monte Carlo is normalized to the experimental number of events.
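For reference, the first-guess line fit mentioned above can be written as a least-squares relation between hit times and positions; this is a minimal sketch under the stated assumption that hits are given as times and 3D positions, not the filter implementation itself.

# Minimal line-fit sketch: model each hit position as x_i ~ r0 + v * t_i and
# solve for the velocity v by least squares; |v| is the v_line cut variable.
import numpy as np

def linefit(times, positions):
    """times: (N,) array in ns; positions: (N, 3) array in m.
    Returns (r0, v): a point on the line and the fitted velocity vector."""
    t = np.asarray(times, dtype=float)
    x = np.asarray(positions, dtype=float)
    t_mean, x_mean = t.mean(), x.mean(axis=0)
    v = ((t[:, None] - t_mean) * (x - x_mean)).sum(axis=0) / ((t - t_mean) ** 2).sum()
    r0 = x_mean - v * t_mean
    return r0, v

# A cascade-like (spherically expanding) hit pattern yields a small |v|,
# so a cut such as |v| < 0.25 keeps cascade candidates (cf. Table I).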
B. Offline filtering
The data, after online filtering and transfer from the
South Pole, were passed through more sophisticated
algorithms to reconstruct both muon tracks and cascades.
This reconstruction uses the maximum-likelihood recon-
struction algorithms described in [2], [6].
Several cuts were applied sequentially, and the inter-
mediate data sets are identified as different levels. Level-1 is the trigger level and events passing the online filtering correspond to Level-2. The rates at different
levels are summarized in Table I.
At Level-3, events were selected with (i) a reconstructed track zenith angle greater than 73° and (ii) a difference Llh(track) - Llh(cascade) > -16.2 in the likelihood parameters of the track and cascade reconstructions, to select cascade-like events. This selection was optimized for the combined efficiency (∼80%) in both atmospheric [7] and extra-terrestrial neutrino searches and keeps the data at this level common to both analyses. At Level-4 we require that all cascades originate inside the detector. In IceCube, many muon tracks that radiate energetic bremsstrahlung or produce hits in DOMs close to the detector edges can mimic uncontained cascades. To remove this background of partial bright muon events we require that the four earliest hits in the event are inside the fiducial volume of the detector. The boundaries of the fiducial volume in x-y are shown in Fig. 2 as continuous lines. In the z direction only an upper boundary was used; it was set at the position of the 8th DOM from the top. Approximately 1% of the background events (data and Monte Carlo) and ∼13% of the Monte Carlo signal events after online filtering pass the Level-3 selections and satisfy the fiducial volume requirement.
At Level-5 we require that the number of hit DOMs (NCh) is greater than 20, that the reconstructed track zenith angle exceeds 69°, and that the event duration, defined as the time difference between the last and first hit DOM, is less than 5 μs. The latter cut removes long events, which are mostly coincident double or triple muon events, typically with a high multiplicity of hit DOMs.
Fig. 2. a) The y versus x positions of the strings in the IC22 detector configuration. b) The reconstructed center-of-gravity (COG) y versus x position from IC22 data after online filtering. The continuous lines show the boundaries of the fiducial volume, which is used in the analyses to restrict the position of the first hits in the event.
Fig. 3. The fill-ratio versus the distance D (defined in the text) for signal Monte Carlo (left), muon background Monte Carlo (middle) and the data (right), for events with COG-z > -100 m (top) and COG-z < -100 m (bottom). The dashed lines show the background cut at Level-7 used in the analysis.
At Level-6 we require that the reconstructed cascade
vertex positions x(y) and COG-x(y) agree to within 60
meters, and that the reduced track and cascade recon-
struction likelihood ratio is less than 0.95. For each event
we apply the two track reconstruction algorithm and
require that the reconstructed tracks coincide to within
1μs. This selection mostly removes background events
with coincident muon tracks which are well separated
in time.
At Level-7, stringent selections are made on the DOM multiplicity and the fill-ratio. The fill-ratio quantifies the fraction of hit DOMs within a sphere around the reconstructed cascade vertex position with a radius of 2 × D, where D is the average displacement of the reconstructed cascade vertex with respect to the positions of the hit DOMs in the event. The fill-ratio versus the distance D for signal Monte Carlo, muon background Monte Carlo, and the data, for events with COG-z > -100 m and COG-z < -100 m, is shown in Fig. 3. The presently used version of the background Monte Carlo is in good agreement with the data for the top part of the detector, but not for the bottom part. In the bottom part of the detector, the clearer ice (less absorption than at the top) makes some muons look like cascades (spherical shape and high DOM multiplicity). After applying the cuts on the fill-ratio and the distance D, as shown by the dashed lines in Fig. 3, 135 events from the data burn sample and 11 background Monte Carlo events remained. Almost all of them originate in the bottom part of the detector, as shown in Fig. 3. The remaining 11 Monte Carlo background events correspond to an expected ∼90 events for the 240 days of the IC22 run.
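To make the fill-ratio definition concrete, here is a minimal sketch under the literal reading of the sentence above (fraction of the event's hit DOMs inside the 2×D sphere); the exact definition used in the analysis may differ, and the names are illustrative.

# Hedged sketch of the fill-ratio described in the text.
import numpy as np

def fill_ratio(vertex, hit_dom_positions, scale=2.0):
    """Fraction of the event's hit DOMs lying inside a sphere of radius
    scale*D around the vertex, with D the average hit-DOM displacement."""
    vertex = np.asarray(vertex, dtype=float)
    hits = np.asarray(hit_dom_positions, dtype=float)
    r = np.linalg.norm(hits - vertex, axis=1)   # distances vertex -> hit DOMs
    D = r.mean()                                # average displacement
    return (r <= scale * D).mean(), D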
We placed a final Level-8 cut on the reconstructed energy, log E_reco > 4.2, which rejects all remaining background events in the data burn sample and in the background Monte Carlo.
III. RESULTS
The expected number of signal events (NSignal) from a diffuse flux with a strength of 10^-6 (E/GeV)^-2 / (GeV s sr cm^2) is 52 ν_e events for 240 days of livetime. Signal simulations show that events which pass all background rejection criteria are in the energy range from ∼20 TeV to a few PeV (with a mean energy of ∼160 TeV). The energy resolution is ∆(log E) ∼ 0.27, and the x and y position resolution is ∼10 meters. The z position resolution is worse, 25 m, because of a small fraction of events that originated below the detector, where no fiducial volume cut was applied.
A burn sample of ∼ 10% of the total IC22 data set
and the background Monte Carlo sample were used in
developing background rejection criteria. The selections
are such that all events in the burn sample and all
background Monte Carlo events are rejected, whereas
a significant fraction of the signal Monte Carlo events
are retained.
The model rejection factor (MRF), defined as MRF = ⟨μ_90⟩ / NSignal, will be used to determine the flux limit:

Φ_limit = MRF × f(E),   (2)

where f(E) is given by Eq. 1.
TABLE I
Event rates at different selection levels for experimental data (burn sample), atmospheric muon background Monte Carlo and ν_e signal Monte Carlo assuming the flux Φ_model = 1.0 × 10^-6 (E/GeV)^-2 / (GeV s sr cm^2).

Level | Selection criteria | Exp data | Tot bg MC | Signal MC ν_e
1 | Trigger | 580 Hz | 565 Hz | 2.7 × 10^3 × (240 days)^-1
2 | v_line < 0.25 and EvalRatio_ToI > 0.109 | 20 Hz | 14 Hz | 1.8 × 10^3 × (240 days)^-1
3 | Zenith > 73° and Llh(track) - Llh(cascade) > -16.2 | 4 Hz | 2.8 Hz | 1.3 × 10^3 × (240 days)^-1
4 | Fiducial volume (Fig. 2) | 0.3 Hz | 0.15 Hz | 240 × (240 days)^-1
5 | NCh > 20 and Zenith_32iter > 69° and EvtLength < 5 μs | 0.02 Hz | 0.01 Hz | 165 × (240 days)^-1
6 | |RecoX - COGX| < 60 m and |RecoY - COGY| < 60 m and ReducedLlh(track)/ReducedLlh(cascade) > 0.95 and RecoTrack1(Time) - RecoTrack2(Time) < -1 μs | 0.011 Hz | 0.004 Hz | 161 × (240 days)^-1
7 | Fill-ratio (Fig. 3) | 6.8 × 10^-5 Hz | 4.3 × 10^-6 Hz | 68 × (240 days)^-1
8 | NCh > 60 and log E_reco > 4.2 | 0 | 0 | 52 × (240 days)^-1
The analysis is limited by the currently available background Monte Carlo sample. It is not possible to subtract the simulated residual background contribution with sufficient precision. Thus the sensitivity for the diffuse flux of an extraterrestrial neutrino signal, defined as the average upper limit at 90% CL in the absence of signal [9], cannot be determined. To give an order of magnitude for the limit, a conservative estimate making no assumptions on the background would be 4 × 10^-8 (5 × 10^-7) (E/GeV)^-2 / (GeV s sr cm^2) for a hypothetical number of observed events after unblinding of 0 (20).
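As an illustrative cross-check (not stated in the text), combining the MRF definition of Eq. (2) with the 52 expected signal events quoted above and the Feldman-Cousins 90% CL upper limit of 2.44 events for zero observed events and zero assumed background gives a number of the same order as the quoted conservative estimate:

\Phi_{\mathrm{limit}} \approx \frac{\mu_{90}}{N_{\mathrm{Signal}}}\,\Phi_{\mathrm{model}} = \frac{2.44}{52}\times 10^{-6}\,(E/\mathrm{GeV})^{-2}\,/(\mathrm{GeV\,s\,sr\,cm^{2}}) \approx 4.7\times 10^{-8}\,(E/\mathrm{GeV})^{-2}\,/(\mathrm{GeV\,s\,sr\,cm^{2}}).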
In closing, we expect the flux limit from this analysis to be of the same order as the limit on the diffuse flux, Φ_limit = 1.3 × 10^-7 (E/GeV)^-2 / (GeV s sr cm^2) [10], obtained in the cascade channel from 5 years of AMANDA data. Additional background Monte Carlo events are being generated and systematic uncertainties are currently being studied.
REFERENCES
[1] IceCube, A. Achterberg et al., Astropart. Phys. 26, 155 (2006); R. Abbasi et al., Nucl. Instrum. Meth. A 601 (2009) 294.
[2] AMANDA, J. Ahrens et al., Nucl. Instrum. Meth. A 524 (2004) 169.
[3] M. Kowalski, JCAP 05 (2005) 010.
[4] D. Heck et al., Tech. Rep. FZKA 6019 (1998).
[5] A. Gazizov and M. Kowalski, Computer Physics Communications 172 (3) (2005) 203.
[6] AMANDA, J. Ahrens et al., Phys. Rev. D 67 (2003) 012003.
[7] M. D'Agostino (for the IceCube Collaboration), "A search for atmospheric neutrino-induced cascades with IceCube," these proceedings.
[8] IceCube, J. Ahrens et al., IceCube Preliminary Design Document (2001).
[9] G. Feldman and R. Cousins, Phys. Rev. D 57 (1998) 3873.
[10] O. Tarasova, M. Kowalski and M. Walter (for the IceCube Collaboration), in Proc. of the 30th International Cosmic Ray Conference (ICRC 2007), Merida, Yucatan, Mexico, 3-11 Jul 2007; arXiv:0711.0353 [astro-ph], pages 83-86.
PROCEEDINGS OF THE 31st ICRC, ŁÓDŹ 2009
Improved Reconstruction of Cascade-like Events in IceCube
Eike Middell*†, Joseph McCartin‡ and Michelangelo D'Agostino§ for the IceCube Collaboration¶
*DESY, D-15735 Zeuthen, Germany
†Institut für Physik, Humboldt-Universität zu Berlin, D-12489 Berlin, Germany
‡Dept. of Physics and Astronomy, University of Canterbury, Private Bag 4800, Christchurch, New Zealand
§Dept. of Physics, University of California, Berkeley, CA 94720, USA
¶See the special section of these proceedings.
Abstract. Cascade-like events are one of the main signatures in the IceCube neutrino detector. This signature includes electromagnetic and hadronic particle showers from charged or neutral current interactions and hence it provides sensitivity to all neutrino flavours. At energies below 10 PeV these cascades have characteristic lengths of only several meters. Compared to the dimensions of the detector they appear as point-like but anisotropic light sources. We present a new approach to the reconstruction of such events. A maximum likelihood algorithm that incorporates the results of detailed simulations of the light propagation in ice allows for a significantly better analysis of the recorded photon intensities and arrival times. The performance of the algorithm is evaluated in a Monte Carlo study. It suggests that for cascades an angular resolution of 30° is possible.
Keywords: IceCube, cascades, reconstruction
I. INTRODUCTION
The IceCube detector [1] is being built at the ge-
ographical South Pole. It aims for the detection of
neutrinos of cosmic origin, which could answer open
questions in astroparticle physics such as the origin
of cosmic rays and the nature of dark matter. In its
originally planned setup the IceCube detector consists
of 4800 digital optical modules (DOMs) on 80 strings.
These are horizontally spaced by 125 m and located at depths ranging from 1.45 to 2.45 km, thereby spanning a volume of a cubic kilometer of glacial ice. In order
to lower IceCube’s energy threshold down to 10 GeV,
the DeepCore extension will arrange 6 additional strings
in the center of the array. On these strings the DOMs
are closer to each other and are located in depths with
optimal optical properties.
Each DOM contains a photomultiplier tube (PMT)
and the necessary readout electronics. Two digitization
devices allow for the measurement of time distributions
and intensities of photon fluxes inside the detector: the
Analog Transient Waveform Digitizer (ATWD) taking
128 samples over the first 420 ns and the Flash Analog-
to-Digital Converter (FADC) taking 256 samples in an interval of 6.4 μs [2]. Presently three quarters of the
detector are successfully deployed and are taking data.
Neutrinos can interact in the instrumented volume
through neutrino-nucleon or neutrino-electron scattering.
The former process dominates. One exception is the
resonant scattering of electron antineutrinos on atomic electrons at energies of 6.3 PeV, known as the Glashow
resonance. The neutrino interaction is not detected di-
rectly but it can produce charged particles which emit
Cherenkov light in the transparent detector medium. The
possible final states of a neutrino interaction depend on
the flavour and interaction type. For neutrino astronomy
the most prominent neutrino signature is formed by final
states with an emerging muon. They allow one to deduce
the neutrino direction and provide large effective areas
because of the large range of the muon.
The signatures of interest here are neutrino induced
electromagnetic and hadronic particle showers. Such
cascades can originate from all neutrino flavours and oc-
cur in many of the interaction scenarios. Assuming that
the neutrinos were generated in pion decays one expects
a flavour ratio at the source of ν_e : ν_μ : ν_τ = 1 : 2 : 0.
Due to neutrino oscillations this ratio is transformed to
1 : 1 : 1 before detection, which makes the sensitivity to
all flavours important.
Furthermore, electromagnetic cascades allow for a
good energy reconstruction, since the number of emit-
ted photons scales linearly with the deposited energy.
Hadronic cascades appear similar to electromagnetic
ones, with the small correction that for the same de-
posited energy there are about 20% fewer photons pro-
duced [3].
Below 10 PeV cascades have characteristic lengths of
several meters. Compared to the distances between the
DOMs they appear as point-like light sources. Nev-
ertheless, the angular emission profile of a cascade
is anisotropic: the photons originate from one point but they are preferentially emitted at the Cherenkov angle Θ_c = 41° [4]. Therefore, close to the interaction vertex the neutrino direction can in principle be derived from the angular distribution of the Cherenkov photons. At the large spacing of the DOMs this ability is impaired by the strong light scattering in the ice [5].
Because of this inherent difficulty of reconstructing the
direction of particle showers in ice, studies of these
events have been restricted to the search for a diffuse
flux of neutrinos. In this situation even a rough estimate of the neutrino direction would enhance the possibilities of this detection channel.
II. NEW APPROACH TO CASCADE RECONSTRUCTION
The existing maximum likelihood reconstruction for
cascades [3] does not account for the inhomogeneity of
the ice and does not try to reconstruct the neutrino
direction. It also does not exploit all the capabilities of
the IceCube DAQ.
The aim of the current work is to use all relevant
information in the waveforms captured by the DOMs
to reconstruct the incident neutrino in a cascade-like
event. The point-like but directed cascade can be fully described by 7 parameters: the time and vertex (t, x, y, z) of the neutrino interaction, the deposited energy E, and the direction of the neutrino. The latter is described by the two angles zenith Θ and azimuth Φ. The reconstruction searches for the set of these parameters c = (t, x, y, z, E, Θ, Φ) that fits the observation best.
A good understanding of the optical properties of
the glacial ice is crucial to the IceCube experiment.
The instrumented volume is pervaded with dust layers
that track historic climatological changes. Since the
propagation of light in such an inhomogeneous medium
cannot be treated analytically, the Photonics Monte Carlo
package [6], [7] has been used. Its simulation results
are available in tabulated form. For a given setup of a light source and a DOM, these tables allow one to make predictions for the mean expected amplitude ⟨μ(c)⟩ and the photon arrival time distribution p(t_d, c), where t_d denotes the delay time. For a photon with speed c_ice that is emitted at (t_e, x_e) and recorded at (t_r, x_r), with x denoting the position vector, the time

t_d = t_r − t_e − |x_r − x_e| / c_ice

denotes the additional time the photon takes to reach the receiver over a scattered path rather than a straight line. Scattering in ice can cause delay times of up to a few microseconds. Depending on the orientation and distance of the cascade with respect to the DOM, the arrival time distributions differ in shape (compare Figure 1).
With the tabulated quantities, the expected amplitude in a time interval [t_1, t_2] is calculated as

μ(c) = f ⟨μ(c)⟩ ∫_{t_1}^{t_2} p(t_d, c) dt_d + R_noise (t_2 − t_1).   (1)
Two small corrections are applied to the prediction of
Photonics. A constant rate R_noise accounts for noise hits and a factor f corrects for deviations from the
mean amplitude due to the PMT response and charge
reconstruction, which is not modelled by Photonics.
With this prediction, a likelihood description of the measurement is possible. Assuming a Poisson process for every distinct^1 sample i taken by the ATWD and the FADC in DOM o, one can compare the measured amplitude n_oi to the mean expectation μ_oi and construct the likelihood:

L = ∏_{o,i} [ μ_oi(c)^{n_oi} / n_oi! ] exp{ −μ_oi(c) }.   (2)

Fig. 1. Tabulated delay time distributions dP/dt for a DOM at 100 m and 300 m distance from the cascade. The distributions are shown for two orientations of the cascade, pointing either toward or away from the DOM. Photons are increasingly delayed if they either travel larger distances or have to be backscattered to reach the DOM.
By taking the negative logarithm and rearranging the terms, one obtains:

−log(L) = ∑_o [ ⟨μ_o⟩ − n_o log⟨μ_o⟩ − ∑_i n_oi log( μ_oi / ⟨μ_o⟩ ) ],   (3)

where ⟨μ_o⟩ = ∑_i μ_oi and n_o = ∑_i n_oi. The combinatorial term from the Poisson probability has been omitted since it does not depend on the reconstruction hypothesis.
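To illustrate how Eqs. (1)-(3) fit together, the sketch below evaluates the Poissonian negative log-likelihood for one parameter hypothesis, with the tabulated Photonics quantities represented by assumed callables (mean_amplitude, delay_time_pdf); it shows the structure only, not the IceCube implementation.

# Hedged sketch of the negative log-likelihood of Eqs. (1)-(3): Poisson terms
# over DOMs o and time samples i, with the constant n_oi! term dropped.
import numpy as np

def expected_counts(c, dom, t_edges, mean_amplitude, delay_time_pdf,
                    f=1.0, r_noise=1e-3):
    """Eq. (1): expected charge in each readout sample [t_i, t_{i+1}] of one DOM."""
    t_edges = np.asarray(t_edges, dtype=float)
    widths = np.diff(t_edges)
    centers = 0.5 * (t_edges[:-1] + t_edges[1:])
    pdf = np.array([delay_time_pdf(c, dom, t) for t in centers])
    return f * mean_amplitude(c, dom) * pdf * widths + r_noise * widths

def neg_log_likelihood(c, waveforms, mean_amplitude, delay_time_pdf):
    """Eqs. (2)-(3) without the constant n! term.
    waveforms: list of (dom, t_edges, n) with measured sample charges n."""
    nll = 0.0
    for dom, t_edges, n in waveforms:
        n = np.asarray(n, dtype=float)
        mu = expected_counts(c, dom, t_edges, mean_amplitude, delay_time_pdf)
        nll += mu.sum() - np.sum(n * np.log(mu), where=n > 0)
    return nll

The seven-parameter fit would then be obtained by handing such a function to a numerical minimizer (the text uses MINUIT's simplex algorithm).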
A considerable speedup in the computation results from the fact that in the sum over the samples i only time intervals with n_oi > 0 contribute. Hence, periods in the DOM readout with no measured charge can be ignored. Practically this is implemented in two steps: first the waveform is scanned for pulses, then these pulses are used to calculate the likelihood.
The cascade reconstruction is performed by searching numerically for the minimum of −log(L), which is a function of the seven cascade parameters. This minimization is seeded with the time, vertex and direction estimates that one obtains from calculating the center of gravity and tensor of inertia of the hit pattern. These calculations are implemented in IceCube's first-guess reconstruction algorithms. The number of triggered DOMs provides a rough estimate of the deposited energy. The minimization is done with MINUIT using a simplex algorithm that is executed iteratively to improve the result stepwise.
The problem can be significantly simplified if the vertex and the time of the interaction are already known (e.g. when they are determined by another method) and the orientation of the cascade is neglected. Then the likelihood, which now depends only on the cascade energy, provides an energy reconstruction that benefits from the improved light-propagation model. In this case, the search for the minimum is reduced to a numerical root-finding problem:

∂(−log(L)) / ∂E = ∑_o [ μ_o − n_o / (1 + R_noise Δt / μ_o) ] = 0,   (4)

where Δt denotes the readout window length.

^1 In the first 420 ns the readout windows of the ATWD and FADC overlap. One has to choose between both measurements. Because of its precision, the samples from the ATWD are preferred.

Fig. 2. Offsets between the reconstructed and the true x and z coordinates, obtained from an iterative minimization of the 7-dimensional likelihood. Only cascades whose reconstructed vertex is contained in IC40 are selected. The width σ of a fitted Gaussian defines the resolution (σ = 6.16 m in x, σ = 3.98 m in z), which is better for z because of the denser DOM spacing along the string.

Fig. 3. Left: Offset between the reconstructed and the deposited logarithmic energy, log10(E_reco/E_true), for the same event sample (fitted mean 0.05, σ = 0.13). Right: Comparison between the reconstructed and the deposited logarithmic energy. The deviation from the identity line above 10 PeV illustrates the increasing impact of saturation effects on the energy reconstruction.
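Returning to the energy-only case of Eq. (4): because the noise-free expectations scale linearly with E, the root can be found with a standard bracketing solver. The sketch below assumes per-DOM expectations at a 1 GeV reference energy (mu_unit) and is illustrative only.

# Hedged sketch: energy-only reconstruction by root finding on Eq. (4),
# assuming the physics expectation in DOM o scales as mu_o(E) = E * mu_unit[o].
import numpy as np
from scipy.optimize import brentq

def energy_from_charges(n_obs, mu_unit, r_noise, dt):
    """n_obs: observed charge per DOM; mu_unit: expected charge per DOM at
    E = 1 GeV (vertex, time and direction fixed); dt: readout window length."""
    n_obs = np.asarray(n_obs, dtype=float)
    mu_unit = np.asarray(mu_unit, dtype=float)

    def dnll_dE(E):                       # left-hand side of Eq. (4)
        mu = E * mu_unit
        return np.sum(mu - n_obs / (1.0 + r_noise * dt / mu))

    # bracket the root on a wide energy range (GeV) and solve
    return brentq(dnll_dE, 1e-2, 1e10)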
III. RESULTS
The reconstruction algorithm has been tested with a
simulated electron neutrino dataset for IceCube in its
year 2008 configuration with 40 strings. The primary
neutrinos have energies in the range from 10^1.7 GeV to 10^10 GeV and are weighted to an E^-2 spectrum. For the simulation of showers, the parametrization derived in [4] and implemented in Photonics is used. Lower energy showers (below PeV energies) are represented as point-like light sources with an anisotropic emission profile. At PeV
energies the cascade is split up into several cascades to
simulate the elongation due to the LPM effect.
To be part of the event selection used in the following, an event has to trigger the detector, the reconstruction must converge (fulfilled by 79%), and the reconstructed vertex has to be located inside the geometric boundaries of the detector (fulfilled by 38%).
To evaluate the resolution of the reconstruction, the distributions of offsets between the reconstructed and the true vertex coordinates and energies are shown in Figures 2 and 3. The obtained vertex resolutions are about 7 m in x and y and 4 m in z. This is an improvement with respect to the existing likelihood reconstruction [8]; for the same dataset and selection criteria it yields resolutions of 15 m in x and y and 8 m in z. The better resolution in z results from the smaller distance of only 17 m between the DOMs on one string.
Fig. 4. Left: Distribution of the cosine of the angle Ψ between the reconstructed and the true direction (median cos(Ψ) = 0.71); the angular resolution is given by the median. Right: Angular resolution as a function of the energy.
The result of the energy reconstruction is shown in
Figure 3. A resolution of σ(log10(E_reco/E_true)) = 0.13 has been obtained. For large photon fluxes, which can
originate from highly energetic or nearby cascades, the
saturation of the PMT limits the recorded charge. This
affects the energy reconstruction as can be seen in the
right plot of Figure 3. Above 10 PeV the reconstructed
energy is systematically too low due to the saturation.
A useful measure of the angular resolution is the median of the cos(Ψ) distribution, where Ψ is the angle between the true and the reconstructed direction. For all events that fulfill the selection criteria this distribution is plotted in the left plot of Figure 4. A study of the energy dependence suggests that for the interesting energy range of 10 TeV to 10 PeV an angular resolution of 30°-35° is possible (right plot in Figure 4). At energies above 10 PeV, the LPM effect leads to an elongation of the cascade and the reconstruction hypothesis of a point-like light source is no longer applicable.
IV. SUMMARY AND OUTLOOK
A maximum likelihood reconstruction for cascade-like
events has been developed. It takes into account the full
recorded waveform information as well as the ice proper-
ties. A simulation study for the 40 string detector geom-
etry of the year 2008 demonstrates the feasibility of an
angular resolution of down to 30°. Compared to muons
this is still a very limited precision, but it can provide
new opportunities for neutrino searches with cascade-
like events. With the angular resolution achieved, the
discrimination between upward and downward going
neutrinos becomes possible as well as the identification
of neutrinos originating from the galactic plane. With the
DeepCore extension a further improvement is expected.
The achieved results have to withstand further ver-
ification. The next step is to test the performance of
the algorithm on measurements with LED and laser
light sources in the detector and muon events with
bright bremsstrahlung cascades. Several possibilities to
enhance the algorithm exist. A different description of
saturated DOMs in the likelihood could improve the
performance at higher energies. It will be investigated
if the shape of the likelihood could be used to estimate
the error of the reconstruction. Finally, the presented ap-
proach can be extended to reconstruct combined events
with more than one light source in the detector.
REFERENCES
[1] A. Achterberg et al., Astropart. Phys. 26, 155-173 (2006).
[2] R. Abbasi et al., Nucl. Instrum. Meth. A 601, 294-316 (2009).
[3] M. Kowalski, PhD thesis, Humboldt University of Berlin (2004).
[4] C. Wiebusch, PhD thesis, RWTH Aachen, PITHA 95/37 (1995).
[5] M. Ackermann et al., J. Geophys. Res. 111, D13203 (2006).
[6] http://photonics.sourceforge.net
[7] J. Lundberg et al., Nucl. Instrum. Meth. A 581, 619-631 (2007).
[8] J. Kiryluk et al., in Proc. of 30th ICRC (2007).
PROCEEDINGS OF THE 31st ICRC, ŁÓDŹ 2009
Searches for neutrinos from GRBs with the IceCube 22-string detector and sensitivity estimates for the full detector
A. Kappes∗†, P. Roth‡, E. Strahler∗, for the IceCube Collaboration§
∗ Physics Dept., University of Wisconsin, Madison WI 53703, USA
† affiliated with Universität Erlangen-Nürnberg, D-91058 Erlangen, Germany
‡ Physics Dept., University of Maryland, College Park MD 20742, USA
§ see special section of these proceedings
Abstract. This contribution presents results of
searches with IceCube in its 22-string configuration
for neutrinos from 41 stacked gamma-ray bursts
(GRBs) detected in the northern sky by satellites like
Swift. In addition, the capabilities of the full 80-string
detector based on a detailed simulation are discussed.
GRBs are among the few potential source classes for
the highest energy cosmic rays and one of the most
puzzling phenomena in the universe. In their ultra-
relativistic jets, GRBs are thought to produce neutri-
nos with energies well in excess of 100 TeV. However,
up to now, no such neutrino has been observed.
IceCube, currently under construction at the South
Pole, is the first km³-scale neutrino telescope. As
such it will have a significantly improved sensitivity
compared to the precursor class of 0.01 km³ neutrino
telescopes.
Keywords: Gamma-Ray Bursts, Neutrinos, IceCube
I. INTRODUCTION
Gamma-ray Bursts (GRBs) have been proposed as a
plausible source of the highest energy cosmic rays [1]
and high energy neutrinos [2]. The prevalent belief is
that the progenitors of so-called long-soft GRBs are very
massive stars that undergo core collapse leading to the
formation of a black hole. Short-hard GRBs are believed
to be the product of the merger of binary compact
objects such as neutron stars and black holes leading to
the creation of a single black hole. Material is ejected
from the progenitor in ultra-relativistic jets. In these jets,
electrons and baryons are accelerated to high energies,
where the synchrotron radiation from the electrons is
observed as the prompt γ-ray signal. Neutrinos are
predicted to be produced in the interaction of accelerated
baryons with matter or photons in various phases of
the GRB: a TeV precursor, while the jet burrows through
the envelope of the progenitor of a long-soft burst [3];
PeV prompt emission, in coincidence with the observed
γ-ray signal [2]; and an EeV early afterglow, as the jet
collides with interstellar material or the progenitor wind
in the early afterglow phase [4].
IceCube is a high energy (E ≳ 1 TeV) neutrino telescope currently under construction at the South Pole [5].
scope currently under construction at the South Pole [5].
When completed, the deep ice component of IceCube
will consist of 5160 digital optical modules (DOMs)
arranged in 86 strings frozen into the ice, at depths
ranging from 1450 m to 2450 m. Each DOM contains
a photo-multiplier tube and supporting hardware inside
a glass pressure sphere. The total instrumented volume
of IceCube will be ∼1 km³. The DOMs indirectly detect
neutrinos by measuring the Cherenkov light from sec-
ondary charged particles produced in neutrino-nucleon
interactions. Presently, 59 strings are installed and col-
lect data continuously. Construction is scheduled for
completion by 2011. AMANDA-II, IceCube’s prede-
cessor array, operated between January 2000 and May
2009. It consisted of 677 optical modules arranged on
19 strings with an instrumented volume approximately
60 times smaller than that of IceCube. Searches with
AMANDA-II for neutrinos in coincidence with GRBs
have been reported with negative results [6], [7].
The two main channels for detecting neutrinos with
IceCube are the muon and the cascade channels. Charged
current interactions of
ν
µ
produce muons that, at TeV
energies, travel for several kilometers in ice and leave a
track-like light pattern in the detector. The detectors are
mainly sensitive to up-going muon neutrinos as the Earth
can be used to shield against the much larger flux of
(down-going) atmospheric muons. Searches for neutrinos from
GRBs in the muon channel benefit from good angular
resolution (∼1° for Eν > 1 TeV) and from the long
range of high energy muons. Therefore, we use this
channel in our analyses.
II. ICECUBE 22-STRING RESULTS
In our analyses, we search the IceCube 22-string con-
figuration data, collected between May 2007 and April
2008, for muon neutrinos from GRBs in the northern
hemisphere. In [8] further analyses using IceCube 22-
string data are presented which extend the muon neutrino
search to GRBs in the southern sky and use the cascade
channel to search for neutrinos of all flavors from GRBs
in both hemispheres, respectively.
We perform our searches both in the prompt (defined
by the observed
γ
-ray emission) and the precursor (100 s
before the prompt time window) time windows. In
order to account for alternative emission scenarios, an
additional search is conducted in an extended window
from −1 h to +3 h around the burst. The data outside
these windows (off-time data) is used to estimate the
background in the search windows.
Fig. 1. Neutrino spectra (E² × dN/dE in GeV cm⁻², versus Eν in GeV) for all 41 GRBs investigated in the analyses, shown as the 41 individual bursts and their sum (preliminary). The fluences were calculated for each burst individually using measured burst parameters following [11]. For comparison, average Waxman-Bahcall GRB fluences (WB, [2]) are shown for a single burst and for the sum of 41 bursts.
To prevent bias in our analyses, the data within the
−1 h to +3 h window (on-time data) are kept blind
during optimization. Only low level quantities of the on-
time data are examined in order to determine stability.
The remaining, usable off-time data amounts to 269
days of livetime. Of the 48 northern hemisphere bursts
detected by satellites (mainly Swift [9]), 7 do not have
quality IceCube data associated with them during the
prompt/precursor emission windows. For all remaining
41 GRBs, tests show no indications of abnormal behav-
ior of the detector.
As customary, we use the Waxman-Bahcall model as
a benchmark for neutrino production in GRBs. The orig-
inal calculation with this model [2] used average GRB
parameters as measured by BATSE [10]. It was refined
by including specific details for individual GRBs [11].
Our neutrino calculations follow the latter prescription.
For many GRBs the available information is incomplete.
In that case we use average parameters in the modeling
of the neutrino flux. The individual burst neutrino spectra
are displayed in Fig. 1.
Tracks are reconstructed using a log-likelihood recon-
struction method [12]. A fit of a paraboloid to the region
around the maximum in the log-likelihood function
yields an estimate of the uncertainty on the reconstructed
direction. Initially, candidate neutrino events are out-
numbered (by several orders of magnitude) by down-
going atmospheric muons that are mis-reconstructed as
up-going events. Application of data selection criteria
allows us to extract a high-purity sample of up-going
(atmospheric, and potentially astrophysical) neutrinos.
In order to determine our detector response to the
expected GRB neutrinos, we simulate these signal events
using ANIS [13]. Background from atmospheric muons
is simulated with CORSIKA [14]. Propagation of neu-
trinos and muons through the Earth and ice are per-
formed with ANIS and MMC [15]. The photon signal
at the DOMs is determined from a detailed simulation
[16] of the propagation of Cherenkov light from muons
and showers through the ice. The simulation of the DOM
response takes into account the DOM's angular acceptance and includes a simulation of the DOM electronics.
Fig. 2. The SVM classifier distribution of off-time data, simulated backgrounds (dashed), and simulated GRB muon neutrinos (solid). The lower frame displays the MDF resulting from a cut on the SVM classifier. The vertical dashed line shows the final tightened cut at 0.25.
The DOM output is then processed with a simulation of
the trigger. Afterwards, the simulated events are treated
in the same way as the real data.
A. Binned analysis
We perform a binned analysis searching for emission
during the prompt phase. After a loose preselection of
events, various quality parameters are combined using a
machine learning algorithm. The algorithm used was a
Support Vector Machine (SVM) [17] with a radial basis
function kernel. The SVM was trained using the off-
time filtered data as background and all-sky neutrino
simulation weighted to the sum of the individual burst
spectra as signal. The optimum SVM parameters (kernel
parameter, cost factor, margin) were determined using a
coarse, and then fine, grid search with a 5-fold cross
validation technique at each node [18].
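A minimal sketch of this classification step is given below, assuming scikit-learn; the toy feature arrays, labels and grid ranges are placeholders and not the actual quality parameters or settings of the analysis:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
# Toy stand-ins: rows are events, columns are track-quality parameters.
X_bkg = rng.normal(0.0, 1.0, size=(2000, 5))   # off-time (background-dominated) data
X_sig = rng.normal(0.8, 1.0, size=(2000, 5))   # simulated signal neutrinos
X = np.vstack([X_bkg, X_sig])
y = np.r_[np.zeros(len(X_bkg)), np.ones(len(X_sig))]

# RBF-kernel SVM; a coarse grid (to be refined around its optimum afterwards)
# is scanned with 5-fold cross validation at each node.
coarse_grid = {"svc__C": [0.1, 1, 10, 100], "svc__gamma": [0.01, 0.1, 1]}
model = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
search = GridSearchCV(model, coarse_grid, cv=5)
search.fit(X, y)
print("best parameters:", search.best_params_)

# The signed distance to the decision boundary can serve as the event classifier
# on which the final cut is placed.
classifier_value = search.decision_function(X)
```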
The resulting SVM classification of events is shown
in Fig. 2. The final cut on this parameter is optimized to
detect a signal fluence with at least 5
σ
(significance)
in 50% of cases (power) by minimizing the Model
Discovery Factor (MDF). The MDF is the ratio be-
tween the signal fluence required for a detection with
the specified significance and power and the predicted
fluence [19]. The angular cut around each GRB is then
calculated to keep 3/4 of the remaining signal after the
cut on the SVM classifier. In this way, there is one
cut on the SVM classifier for all GRBs, but different
angular cuts around each GRB according to the angular
resolution of the detector in that direction. The SVM cut
that returns the best sensitivity is at 0.22. This cut lies
directly on a discontinuity in the MDF curve, and so it
is tightened away from that discontinuity so that a 1
σ
underestimation of the background level will not lead to
a discovery claim more significant than is appropriate.
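Written out, and with symbols chosen here purely for illustration (they are not the authors' notation), the optimization minimizes over the classifier cut c

\[
\mathrm{MDF}(c) \;=\; \frac{\Phi_{\mathrm{req}}(c;\,5\sigma,\ 50\%\ \mathrm{power})}{\Phi_{\mathrm{pred}}},
\]

where Φ_req is the smallest signal fluence that would yield a 5σ excess in 50% of hypothetical experiments for that cut, and Φ_pred is the predicted fluence of Fig. 1.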
B. Unbinned likelihood analysis
We compare the performance of the binned analysis
for the prompt emission to that of an unbinned likelihood
analysis. Furthermore, we use the unbinned method to
look for neutrino emission in the precursor and extended
time window. The unbinned method used here is similar
to that described in [20]. The signal,
S(x⃗_i), and background, B(x⃗_i), PDFs are formed from a
product of a directional, a time and an energy PDF.
Signal PDF:
The directional signal PDF is a two-
dimensional Gaussian distribution with the two widths
being the major and minor axes of the
1σ
error ellipse
of the paraboloid fit. The time PDF is flat over the
respective time window and falls off on both sides with a
Gaussian distribution of variable width depending on the
duration of the emission. The energy PDF is determined
from the distribution of an energy estimator [21] for each
GRB individually. The signal PDFs of the GRBs are
combined using a weighted sum [22]
\[
S_{\mathrm{tot}}(\vec{x}_i) = \frac{\sum_{j=1}^{N_{\mathrm{GRBs}}} w_j\, S_j(\vec{x}_i)}{\sum_{j=1}^{N_{\mathrm{GRBs}}} w_j}, \tag{1}
\]
where S_j(x⃗_i) is the signal PDF of the j-th GRB and w_j
is a weight that, for the prompt and precursor windows,
is proportional to the expected number of events in the
detector according to the calculated fluences. In the case
of the extended window we use w_j = 1 for all GRBs.
Background PDF:
For the directional background
PDF obtained from the off-time data, the detector asym-
metries in zenith and azimuth are taken into account by
evaluating the data in the detector coordinate system.
The time distribution of the background during a GRB
can be assumed to be constant, yielding a flat time PDF.
The energy PDF is determined in the same way as for
the signal PDF with the spectrum corresponding to the
Bartol atmospheric neutrino flux [23].
All PDFs are combined in a log-likelihood ratio
\[
\ln(R) = -\langle n_s\rangle + \sum_{i=1}^{N} \ln\!\left(\frac{\langle n_s\rangle\, S_{\mathrm{tot}}(\vec{x}_i)}{\langle n_b\rangle\, B(\vec{x}_i)} + 1\right) \tag{2}
\]
where the sum runs over all reconstructed tracks in the
final sample. The variable ⟨n_b⟩ is the expected mean
number of background events, which is determined from
the off-time data set. The mean number of signal events,
⟨n_s⟩, is a free parameter which is varied to maximize
equation (2) in order to obtain the best estimate for the
mean number of signal events, ⟨n̂_s⟩.
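A minimal numerical sketch of equations (1) and (2), assuming NumPy and SciPy; the per-event PDF values and weights below are random placeholders rather than the directional, time and energy PDFs of the analysis:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def s_tot(S_per_grb, weights):
    """Eq. (1): weighted sum of per-GRB signal PDF values for each event.
    S_per_grb has shape (n_grbs, n_events)."""
    w = np.asarray(weights, dtype=float)[:, None]
    return (w * S_per_grb).sum(axis=0) / w.sum()

def ln_R(n_s, S, B, n_b):
    """Eq. (2): log-likelihood ratio for an assumed mean signal <n_s>."""
    return -n_s + np.sum(np.log(n_s * S / (n_b * B) + 1.0))

# Toy inputs: 3 GRBs and 50 events in the final sample.
rng = np.random.default_rng(2)
S_per_grb = rng.uniform(0.1, 2.0, size=(3, 50))   # per-GRB signal PDF values
weights = [1.3, 0.4, 0.8]                         # expected events per GRB (placeholder)
B = rng.uniform(0.5, 1.5, size=50)                # background PDF values
n_b = 50.0                                        # mean background from off-time data

S = s_tot(S_per_grb, weights)
best = minimize_scalar(lambda ns: -ln_R(ns, S, B, n_b),
                       bounds=(0.0, 20.0), method="bounded")
print("best-fit <n_s>:", round(best.x, 2), " max ln(R):", round(-best.fun, 2))
```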
To determine whether a given data set is compatible
with the background-only hypothesis,
10⁸ background
data sets for the on-time windows are generated from
off-time data by randomizing the track times while
taking into account the downtime of the detector. For
each of these data sets the
ln(R)
value is calculated.
The probability for a data set to be compatible with
background is given by the fraction of background data
sets with an equal or larger
ln(R)
value.
The analysis is performed on a high-purity up-going
neutrino sample after tight selection criteria have been
applied. The unbinned likelihood method requires a
∼1.8 times lower fluence for a 5σ detection than the
binned method. The former is therefore used for the
results presented in this paper.
Fig. 3. Light lines: predicted fluences (E² × fluence in GeV cm⁻², versus Eν in GeV) from the 41 northern hemisphere GRBs for different emission models: prompt (solid, sum of individual spectra as plotted in Fig. 1) and precursor (dashed, [3]). Dark lines: 90% C.L. upper limits on the neutrino fluences obtained with the unbinned likelihood analysis (preliminary).
The unblinding procedure involves applying the like-
lihood method to the on-time data set after neutrino can-
didate event selection. For all three emission scenarios
the best estimate for the number of signal events (⟨n̂_s⟩)
is zero and hence consistent with the null hypothesis.
Figure 3 displays preliminary 90% C.L. upper limits
for the 41 GRBs on the fluence in the prompt phase
(sum of individual spectra as plotted in Fig. 1) of
3.7 × 10⁻³ erg cm⁻² (72 TeV – 6.5 PeV) and on the fluence
from the precursor phase [3] of 1.16 × 10⁻³ erg cm⁻²
(2.2 TeV – 55 TeV), where the quoted energy ranges
contain 90% of the expected signal events in the detector.
The limits obtained are not strong enough to constrain
the models. The preliminary 90% C.L. upper limit for
the wide time window is 2.7 × 10⁻³ erg cm⁻² (3 TeV –
2.8 PeV) assuming an E⁻² flux.
III. ICECUBE SENSITIVITY STUDY FOR THE FULL
DETECTOR
Previous studies have estimated the sensitivity of the
completed IceCube to neutrino fluxes from GRBs [24],
[25]. We present new results using updated information
about the detector, improved simulation, and more ac-
curate calculation of the backgrounds.
We utilize the same methods as in the 22-string search
to study the sensitivity of the full 86 string detector.
We generate a set of fake GRBs by sampling from the
populations observed by the Swift [9] and Fermi [26]
satellites and taking their observation rates into account.
We distribute these bursts isotropically over the sky and
randomly in time to produce a set of 142 fake GRBs
in the northern sky for a detector livetime of one year,
with 6840 s of total emission during the prompt phase.
Currently, we use the average Waxman-Bahcall GRB
neutrino flux for all bursts [2]. In the future, it will
be replaced by individual spectra. To test the precursor
phase, we assume each burst has such emission [3]
lasting for 100 s immediately preceding the observed
photons.
Fig. 4. Muon neutrino effective area (in m²) for the full IceCube detector as a function of neutrino energy (preliminary). The solid line is averaged over the half sky (0° < δ < 90°), while the dot-dashed, dotted, and dashed lines represent the most horizontal (0° < δ < 19°), middle (19° < δ < 42°), and most vertical (42° < δ < 90°) thirds of the northern sky in cos δ, respectively.
As no off-time data is available to determine the
background, it is simulated. Atmospheric muons and
neutrinos are generated over the full sky and propagated
to the detector in the same manner as outlined in
section II. The geometry of the full detector is simu-
lated in determining the response to Cherenkov photons.
Signal and background are filtered with cuts on quality
parameters to create a sample of well-reconstructed,
seemingly upgoing events. Further event selection with
a machine learning algorithm [27] is then performed
to remove the remaining misreconstructed downgoing
muons. Afterwards, no atmospheric muons remain in the
sample due to the limited amount of Monte Carlo. As
no real data is available for comparison, the exact purity
of the remaining background sample is unknown, but
is estimated to consist of
> 95%
atmospheric neutrinos
below the horizon, while retaining a large fraction of
GRB signal neutrinos. The effective area for different
declination bands is shown in Fig. 4. Given the detector
angular resolution of ∼1°, we select a search bin radius
of 2° around each fake GRB location, retaining 70–90%
of signal neutrinos (depending on declination) while
dramatically reducing the isotropic background rate. The
background is then rescaled to match the emission time
window for each burst.
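As a worked example of this rescaling (the background rate below is purely illustrative; the burst count and total emission time are those quoted above), the expected background in one search bin is the per-steradian rate times the solid angle of a 2° cone times the emission-window duration:

```python
import numpy as np

def cone_solid_angle(radius_deg):
    """Solid angle (sr) of a cone with the given opening half-angle."""
    return 2.0 * np.pi * (1.0 - np.cos(np.radians(radius_deg)))

rate_per_sr = 1.0e-5          # assumed final-level background rate [events / s / sr]
bin_radius_deg = 2.0          # search bin radius used in the text
t_prompt = 6840.0 / 142       # average prompt emission time per fake burst [s]

omega = cone_solid_angle(bin_radius_deg)
n_bkg = rate_per_sr * omega * t_prompt
print(f"solid angle = {omega:.2e} sr, expected background = {n_bkg:.2e} events per burst")
```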
First results of this study indicate that we will be able
to detect neutrinos from GRBs in either phase at the
5σ
level in greater than 90% of potential experiments
within the first few years of operating the full detector.
In the event of non-detection, we will be able to set strict
upper limits well below the fluences predicted by these
models.
IV. CONCLUSIONS
We have presented results of searches for muon neu-
trinos from GRBs with the 22-string configuration of
the IceCube detector. These searches covered several
time windows corresponding to the various phases of the
predicted emission. In all cases, the data were consistent
with the background-only hypothesis. Hence, we place
upper limits on the muon neutrino fluences from the
different phases, which, however, are not tight enough
to constrain any model yet.
We are also performing a detailed sensitivity study
for the full 80-string IceCube detector. The preliminary
results of this study show that IceCube will be able to
detect the neutrino flux predicted by the leading models
with a high level of significance within the first few
years of operation or, in the event of no observation,
place strong constraints on emission of neutrinos from
GRBs.
V. ACKNOWLEDGMENTS
A. Kappes acknowledges support from the EU Marie
Curie OIF Program.
REFERENCES
[1] E. Waxman, Phys. Rev. Lett., vol. 75, p. 386, 1995.
[2] E. Waxman and J. N. Bahcall, Phys. Rev. Lett., vol. 78, p. 2292, 1997.
[3] S. Razzaque, P. Meszaros, and E. Waxman, Phys. Rev., vol. D68, p. 083001, 2003.
[4] E. Waxman and J. N. Bahcall, ApJ, vol. 541, p. 707, 2000.
[5] A. Karle (IceCube Coll.) et al., Preprint arXiv:0812.3981, 2008.
[6] A. Achterberg (IceCube Coll.) et al., ApJ, vol. 664, p. 397, 2007.
[7] ——, ApJ, vol. 674, p. 357, 2008.
[8] K. Hoffman, K. Meagher, P. Roth, I. Taboada (IceCube Coll.) et al., in this proceedings, ID515, 2009.
[9] D. N. Burrows et al., Space Sci. Rev., vol. 120, p. 165, 2005.
[10] G. J. Fishman et al., in Proc. GRO Science Workshop, GSFC, 1989, p. 2.
[11] D. Guetta et al., Astropart. Phys., vol. 20, p. 429, 2004.
[12] J. Ahrens (AMANDA Coll.) et al., Nucl. Inst. Meth., vol. A524, p. 169, 2004.
[13] A. Gazizov and M. O. Kowalski, Comp. Phys. Comm., vol. 172, p. 203, 2005.
[14] D. Heck et al., "Corsika: A Monte Carlo code to simulate extensive air showers," FZKA, Tech. Rep., 1998.
[15] D. Chirkin and W. Rhode, Preprint hep-ph/0407075, 2004.
[16] J. Lundberg et al., Nucl. Inst. Meth., vol. A581, p. 619, 2007.
[17] C. Cortes and V. Vapnik, Machine Learning, vol. 20, no. 3, p. 273, 1995.
[18] C. Hsu, C. Chang, and C. Lin, "A practical guide to support vector classification," Department of Computer Science, National Taiwan University, Tech. Rep., 2003.
[19] G. C. Hill, J. Hodges, B. Hughey, A. Karle, and M. Stamatikos, in Proc. PHYSTAT 05: Statistical Problems in Particle Physics, Oxford, United Kingdom, Sep. 2005.
[20] J. Braun, J. Dumm, F. de Palma, C. Finley, A. Karle, and T. Montaruli, Astropart. Phys., vol. 29, p. 299, 2008.
[21] J. Zornoza, D. Chirkin (IceCube Coll.) et al., in Proc. International Cosmic Ray Conference (ICRC'07), Mérida, Mexico, Aug. 2007.
[22] R. Abbasi (HiRes Coll.) et al., ApJ, vol. 636, p. 680, 2006.
[23] G. D. Barr et al., Phys. Rev., vol. D70, p. 023006, 2004.
[24] J. Ahrens (IceCube Coll.) et al., Astropart. Phys., vol. 20, p. 507, 2004.
[25] A. Kappes, M. Kowalski, E. Strahler, I. Taboada (IceCube Coll.) et al., in Proc. International Cosmic Ray Conference (ICRC'07), Mérida, Mexico, Aug. 2007.
[26] A. A. Moiseev et al., Nucl. Inst. Meth., vol. A588, p. 41, 2008.
[27] D. Chirkin (IceCube Coll.) et al., in this proceedings, ID0466, 2009.
Search for neutrinos from GRBs with IceCube
K. Meagher∗, P. Roth∗, I. Taboada†, K. Hoffman∗, for the IceCube Collaboration‡
∗ Physics Dept., University of Maryland, College Park MD 20742, USA
† School of Physics and Center for Relativistic Astrophysics, Georgia Institute of Technology, Atlanta, GA 30332, USA
‡ see special section of these proceedings
Abstract. Gamma-ray bursts (GRBs) are one of the
few potential sources for the highest energy cosmic
rays and one of the most puzzling phenomena in
the universe. In their ultra relativistic jets, GRBs
are thought to produce neutrinos with energies well
in excess of 100 TeV. IceCube, a neutrino telescope
currently under construction at the South Pole, will
have improved sensitivity to these yet unobserved
neutrinos. This contribution describes the methods
used for all IceCube neutrino searches from GRBs
triggered by satellites. We also present the status
of three searches for neutrinos in coincidence with
GRBs. The first search seeks to extend existing IceCube
22-string νμ searches to the high-background
southern hemisphere bursts. A second search looks
for neutrino-induced cascades with the 22-string configuration
of IceCube. Another νμ search is planned
for the 40-string configuration of IceCube, and its
status is presented here. This paper is a companion of
another ICRC IceCube contribution that summarizes
the IceCube 22-string northern hemisphere νμ GRB
search results and the expected capabilities of the
completed 86-string detector.
Keywords: Gamma-Ray Bursts, Neutrinos, IceCube
I. INTRODUCTION
Gamma-ray Bursts (GRBs) have been proposed as
one of the most plausible sources of the highest energy
cosmic rays [1] and high energy neutrinos [2]. The
prevalent belief is that the progenitors of so-called long-soft
GRBs are very massive stars that undergo core
collapse leading to the formation of a black hole. Short-hard
GRBs are believed to be the product of the merger
of binary compact objects such as neutron stars and
black holes leading to the creation of a single black
hole. Material is ejected from the progenitor in ultra-
relativistic jets which then produce the observed burst
of
γ
-rays and accelerate particles, including baryons, to
high energy. Neutrinos are predicted to be produced in
multiple scenarios: while the jet burrows through the
envelope of the progenitor of a long-soft burst [3] (TeV
precursor), in coincidence with the observed
γ
-ray signal
[2] (prompt) and as the jet collides with interstellar
material or the progenitor wind in the early afterglow
phase [4] (EeV early afterglow.)
We use the Waxman-Bahcall model as a benchmark
for neutrino production in GRBs. The original calcula-
tion with this model used average GRB parameters as
measured by BATSE [2]. It was refined by including
specific details for individual GRBs [5]. Our neutrino
calculations follow this latter prescription. For many
GRBs the available information is incomplete. In that
case we use average parameters in the modeling of the
neutrino flux.
IceCube is a high energy (E ≳ 1 TeV) neutrino
telescope currently under construction at the South Pole
[6]. The total instrumented volume of IceCube will
be ∼1 km³. IceCube indirectly detects neutrinos by
measuring the Cherenkov light from secondary charged
particles produced in neutrino-nucleon interactions. A
total of 5160 Digital Optical Modules (DOMs) arranged
in 86 strings frozen in the ice are planned. The results
presented here correspond to the 22- and 40-string
configurations. AMANDA-II [7], IceCube’s predecessor
array, had an instrumented volume
≈ 60
times smaller
than that of the full IceCube. Searches for neutrinos
in coincidence with GRBs by AMANDA have been
reported with negative results [8], [9].
The two main channels for detecting neutrinos with
IceCube are the muon and the cascade channels. Charged
current interactions of
ν
µ
produce muons that, at TeV en-
ergies, travel for several kilometers in ice. For the muon
channel the detectors are mainly sensitive to up-going
muons as the Earth can be used to shield against the
much larger flux of down-going atmospheric muons. Because
the neutrino-induced muon spectrum from GRBs
is expected to be much harder than that of cosmic-ray induced
muons, GRB neutrino searches can be extended to the
southern hemisphere, as shown in section III. Searches
for neutrinos from GRBs in the muon channel benefit
from good angular resolution (∼1° for Eν > 1 TeV)
and from the long range of high energy muons. In
the cascade channel the detectors are sensitive to all
neutrino flavors through various interaction channels. In
this case almost all of the neutrino energy is deposited
in a narrow cylinder of O(10 m) in length; point-like
compared to IceCube dimensions. The cascade channel
analyses benefit from good energy resolution (∼0.1 in
log₁₀ E) and from 4π sr sensitivity. Complex event
topologies can also arise from ντ-induced events for
energies above ∼1 PeV [10].
II. SATELLITE TRIGGERED SEARCHES FOR
NEUTRINOS IN COINCIDENCE WITH GRBS
There are several methods for searching for neutrinos
from GRBs. The present contribution and its companion
[11] are
satellite triggered searches
. A list of GRB times
and sky localizations is obtained from satellites, such
as Swift, Fermi and others. From the perspective of
IceCube, the ideal GRB that is a source of neutrinos
has a high photon fluence, a well measured spectrum,
redshift and other electromagnetic properties and is lo-
calized with higher accuracy than the pointing resolution
of IceCube (∼1° for the completed detector). Therefore
wide field of view searches are preferable even at the
expense of reduced sensitivity. In that respect, Fermi is
the main source of GRBs expected to produce neutrinos.
Fermi started operations in summer 2008; before this
time, the main source of GRBs for study was Swift.
To avoid potential biases, all satellite triggered
searches are conducted using
blind
analysis methods.
The
on-time
window around each GRB is left unexamined,
except for low-level quantities that allow us to
establish the stability of the detector. The length of the
on-time window depends on the analysis. The remainder
of the data collected by IceCube, or
off-time
window,
are used to measure the background experimentally. The
on-time window is studied (
unblinded
) only once the
analysis procedure has been fully established.
Searches for GRB neutrinos are performed if the
detector is determined to have been in a period of
stable operation according to general data requirements
developed and shared by the IceCube collaboration.
In addition the time difference between consecutive
events is calculated. At trigger level and for initial
event selection criteria the event rate in IceCube is
dominated by atmospheric muons produced in cosmic
ray showers. Given uncorrelated cosmic rays the time
difference between consecutive events is expected to fall
exponentially with time and the time constant should
correspond to the inverse of the detector event rate.
Finally, a histogram of the number of
events in 10 s bins is fitted with a Gaussian distribution.
Deviations from a normal distribution, measured by a
reduced
χ
2
, indicate periods of high or low detector
event rate. Only GRBs corresponding to stable detector
periods are considered.
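A minimal sketch of these two stability checks, assuming NumPy/SciPy and a plain array of event times; the toy event stream and the way the Gaussian is formed (from the sample mean and width rather than a fit) are simplifications:

```python
import numpy as np
from scipy.stats import norm

def stability_checks(event_times, bin_width=10.0):
    """Return (mean inter-event time, inverse event rate, reduced chi2 of the
    Gaussian comparison for the histogram of counts per 10 s bin)."""
    t = np.sort(np.asarray(event_times, dtype=float))
    dt = np.diff(t)
    tau = dt.mean()                           # exponential time constant (ML estimate)
    inv_rate = (t[-1] - t[0]) / (len(t) - 1)  # should agree with tau for stable running

    counts = np.histogram(t, bins=np.arange(t[0], t[-1], bin_width))[0]
    occ, edges = np.histogram(counts, bins="auto")
    centers = 0.5 * (edges[:-1] + edges[1:])
    expect = len(counts) * np.diff(edges) * norm.pdf(centers, counts.mean(), counts.std())
    chi2 = np.sum((occ - expect) ** 2 / np.where(expect > 0, expect, 1.0))
    return tau, inv_rate, chi2 / max(len(occ) - 3, 1)

# Toy Poisson process standing in for the trigger-level event stream.
rng = np.random.default_rng(3)
times = np.cumsum(rng.exponential(scale=0.7, size=20000))
tau, inv_rate, red_chi2 = stability_checks(times)
print(f"tau = {tau:.3f} s, 1/rate = {inv_rate:.3f} s, reduced chi2 = {red_chi2:.2f}")
```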
Neutrinos are simulated using an implementation of
the ANIS code [12] and atmospheric muons using
the CORSIKA air shower simulation package [13].
Propagation of neutrinos and muons through the Earth
and ice are performed with ANIS and MMC [14].
The photon signal in the DOMs is determined from a
detailed simulation [15] of the propagation of Cherenkov
light from muons and showers through the ice. This is
followed by a simulation of the DOM electronics and
the trigger. The DOM signals are then processed in the
same way as the data. The theoretical models tested have
been corrected to take into account neutrino oscillations.
III. ICECUBE 22-STRING SOUTHERN HEMISPHERE
MUON SEARCH
In this analysis we search for muon neutrinos emit-
ted in the prompt phase from GRBs in the southern
hemisphere (negative declination). We use filtered data
collected with the IceCube detector in its 22-string
configuration between May 2007 and April 2008 for
bursts with declination > −40°. Very low level data
taken within two hours of a burst trigger is used for those
with declination < −40°
. In both cases, the data taken
±
20 minutes from the burst trigger is considered to be
the on-time window. Following the stability procedure
described in section II we find that two of the 42
southern hemisphere bursts do not pass the data quality
criteria or have missing data during the prompt emission
windows. For the remaining 40 GRBs, these tests show
no indications of abnormal behavior of the detector.
Tracks are reconstructed using a log-likelihood re-
construction method [16]. A fit of a paraboloid to
the region around the minimum in the log-likelihood
function yields an estimate of the uncertainty on the
reconstructed direction. Various quality parameters and
energy related parameters are derived from the results of
some other reconstructions discussed in [16] and [17].
The track quality and energy related variables are
combined using a machine learning algorithm. The algo-
rithm used was a Support Vector Machine (SVM) [18]
with a radial basis function kernel. One SVM was trained
for the filtered dataset after a loose preselection of events
and another was trained on the low level dataset. In
both cases, off-time background data is taken as the
background and all-sky neutrino simulation is used as
the signal. The result is an SVM classification between
-1 (background-like) and 1 (signal-like) for all events.
An unbinned likelihood method like the one described
in [19] was used to search each on-time window. This
method avoids using restrictive selection criterion to
throw away events but instead uses probability density
functions (PDFs) to evaluate whether events are more
likely to be signal or background. The signal,
S(x⃗_i), and background, B(x⃗_i), PDFs are each the product of a
time, a space, and an SVM PDF.
The space signal PDF is a two-dimensional Gaussian
determined from the paraboloid fit. The time PDF is flat
over the respective time window and falls off on both
sides with a Gaussian distribution with width equal to
the time window length. The SVM PDF is determined
from the SVM classifier distribution for simulated signal
events.
For the space background PDF the detector asymme-
tries in zenith and azimuth are taken into account by
evaluating the off-time data in the detector coordinate
system. The time distribution of the background during
a GRB is flat over the entire on-time window. The
SVM PDF is again determined from the SVM classifier
distribution of off-time background data.
All PDFs are combined in an extended log-likelihood
function [20] where the sum runs over all reconstructed
tracks in the final sample. The variable ⟨n_b⟩ is the
expected mean number of background events, which is
determined from the off-time data set. The mean number
of signal events, ⟨n_s⟩, is a free parameter which is varied
to maximize the expression

\[
\ln\!\left(R(\langle n_s\rangle)\right) = -\langle n_s\rangle + \sum_{i=1}^{N}\ln\!\left(\frac{\langle n_s\rangle\, S_{\mathrm{tot}}(\vec{x}_i)}{\langle n_b\rangle\, B_{\mathrm{tot}}(\vec{x}_i)} + 1\right) \tag{1}
\]

in order to obtain the best estimate for the mean number
of signal events, ⟨n̂_s⟩.

Fig. 1. The calculated neutrino fluxes for all burst triggers taken during IceCube's 22-string operations in different declination bands.

Fig. 2. The 5σ 50% MDF for each burst in the southern hemisphere muon neutrino search.
To determine whether a given data set is compatible
with the background-only hypothesis,
10⁸ background
data sets for the on-time windows are generated from
off-time data by randomizing the track times while
taking into account the downtime of the detector. For
each of these data sets the
ln(R)
value is calculated.
The probability for a data set to be compatible with
background is given by the fraction of background data
sets with an equal or larger
ln(R)
value. The sensitivity
of each search is determined by injecting simulated
signal events into these randomizations and observing
the resultant
ln(R)
distribution. This allows for the
calculation of the Model Discovery Factor (MDF) for
each analysis (figure 2). The MDF is the ratio of the
lowest signal fluence required for a detection with the
required significance and power to the predicted fluence
[21].
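Schematically, such an MDF can be evaluated from scrambled pseudo-experiments as sketched below (NumPy; the discovery threshold, PDF values, yields and trial counts are illustrative placeholders, not the values of this analysis):

```python
import numpy as np

rng = np.random.default_rng(4)

def ln_R(n_s, S, B, n_b_mean):
    # Same form of log-likelihood ratio as in equation (1).
    return -n_s + np.sum(np.log(n_s * S / (n_b_mean * B) + 1.0))

def detection_power(mu_sig, lnR_threshold, n_trials=500, n_bkg_mean=20):
    """Fraction of pseudo-experiments above the discovery threshold when
    a mean of mu_sig signal events is injected on top of the background."""
    n_pass = 0
    for _ in range(n_trials):
        n_b = rng.poisson(n_bkg_mean)
        n_s_inj = rng.poisson(mu_sig)
        # Placeholder PDF values: injected signal events are more signal-like.
        S = np.r_[rng.uniform(0.2, 1.0, n_b), rng.uniform(2.0, 6.0, n_s_inj)]
        B = np.r_[rng.uniform(0.8, 1.2, n_b), rng.uniform(0.2, 0.6, n_s_inj)]
        best = max(ln_R(ns, S, B, n_bkg_mean) for ns in np.linspace(0.0, 20.0, 41))
        n_pass += best > lnR_threshold
    return n_pass / n_trials

lnR_5sigma = 12.0   # placeholder: in practice taken from background-only scrambles
mu_pred = 0.4       # placeholder: predicted mean signal for the tested model
for scale in np.arange(1.0, 40.0, 1.0):
    if detection_power(scale * mu_pred, lnR_5sigma) >= 0.5:
        print("MDF ~", scale)   # fluence scaling giving 5 sigma with 50% power
        break
```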
IV. ICECUBE 22-STRING CASCADE SEARCH
We are currently conducting a search for neutrino-
induced cascades in the prompt phase for 81 GRBs at
all declinations in coincidence with data from stable
detector periods (see section II for details) collected
with IceCube in its 22-string configuration. The on-time
period for this analysis is
±
1 hour. The off-time is
the remainder of the data collected by IceCube with
22 strings between May 2007 and April 2008 with a
livetime of
∼
269 days.
The analysis proceeds in three steps. First, a prelimi-
nary selection of cascade-like events is performed online
at the South Pole. Second, the South Pole filtered data
are reconstructed by minimizing log-likelihood functions
that take into account the propagation of photons through
ice from the source to the digital optical modules [16].
The reconstructions are performed for both a muon
hypothesis and a cascade hypothesis. The muon hy-
pothesis reconstruction provides a position, a direction,
time and several quality parameters that describe how
appropriately the muon hypothesis fits the data. The
cascade hypothesis reconstruction provides a candidate
neutrino interaction vertex, time, cascade energy and
quality parameters. After the reconstruction further se-
lection criteria are applied:
• L_µ − L_cascade > 16.2. The difference in the
log-likelihood quality parameters for the muon and
cascade reconstructions identifies events that are
better described by the cascade hypothesis.
• θ_µ > 73°. Events that match a down-going muon
are rejected. Here θ_µ = 0° represents a vertically
down-going muon.
• L_cascade/(N_hit − 5) < 8.0. Cascade events that
are low energy or too far from the detector are reconstructed
poorly. We use the cascade hypothesis
reduced log-likelihood quality parameter to select
well reconstructed cascade events.
• N_1hit/N_hit < 0.1. This quantity is a simple cascade
energy proxy because it is equivalent to the
surface to volume ratio of a spherical pattern of
light. N_1hit measures the number of DOMs in an
event that have only one hit (typically one photoelectron),
and N_hit measures the total number of hits.
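A minimal sketch of applying these criteria, assuming NumPy and hypothetical per-event field names (the actual variable names in the processing chain may differ):

```python
import numpy as np

def cascade_selection(ev):
    """ev: structured array holding the per-event quantities used above."""
    better_cascade = (ev["logl_muon"] - ev["logl_cascade"]) > 16.2
    not_downgoing  = ev["theta_muon_deg"] > 73.0
    well_reco      = ev["logl_cascade"] / (ev["n_hit"] - 5) < 8.0
    spherical      = ev["n_1hit"] / ev["n_hit"] < 0.1
    return better_cascade & not_downgoing & well_reco & spherical

# Tiny toy sample with the hypothetical fields.
dtype = [("logl_muon", float), ("logl_cascade", float),
         ("theta_muon_deg", float), ("n_hit", int), ("n_1hit", int)]
events = np.array([(140.0, 110.0, 95.0, 60, 4),     # passes all four cuts
                   (120.0, 115.0, 40.0, 30, 10)],   # fails the first two cuts
                  dtype=dtype)
print(cascade_selection(events))   # -> [ True False]
```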
For the optimization of the selection criteria we are
currently using the Waxman-Bahcall spectrum [2] for
the expected ν_e + ν̄_e signal. After applying the selection
criteria described above, we expect 0.36 (ν_e + ν̄_e) events from
81 GRBs. Because Swift is the main source of GRBs for
this analysis, we expect the typical GRB to be about one
order of magnitude dimmer than what was assumed for
the Waxman-Bahcall spectrum¹. If a detailed per-burst
simulation is performed we expect a significantly lower
signal rate. After applying the selection criteria described
above, ≈ 1.5 × 10⁵ events remain in the off-time data.
¹ The Waxman-Bahcall model assumed BATSE average GRB parameters, especially z_GRB = 1, while Swift's mean observed redshift is significantly higher.
For the third and final part of the analysis we dis-
criminate signal from background with a neural net-
work that uses the parameters described above plus the
reconstructed energy of the cascade hypothesis and a
topological parameter that discriminates long (muon)
from spherical (cascade) events. A cut on the neural
network parameter provides the final discrimination be-
tween signal and background.
V. ICECUBE 40-STRING MUON SEARCH
IceCube began operating with 40 strings on April 5,
2008 and continues to collect data in this configuration at
the time of writing. During this time IceCube remained
extremely stable and maintained a livetime of approximately
95%. These additional strings give IceCube a
fiducial volume of approximately 0.5 km³, making it
the largest neutrino detector to date. This section will
cover the analysis of the northern hemisphere bursts. An
analysis of the southern hemisphere bursts will follow.
To date there have been 116 northern hemisphere
GRBs reported via GCN circulars during 40-string op-
erations. The launch of the Fermi Gamma-Ray Space
Telescope with the Gamma-Ray Burst Monitor (GBM)
has greatly increased the number of bursts available for
analysis. However, the GBM bursts are usually poorly
localized and have 1 sigma uncertainties spanning from
1 to 15 degrees. In addition there are several bursts
detected by other satellites of the InterPlanetary Net-
work (IPN), including the brightest burst in the sample,
GRB080408B, resulting in a total of 48 bursts with
localization uncertainties larger than one degree. In
order to search regions of the sky larger than IceCube’s
angular resolution of approximately 1.5 degrees, new
methods must be utilized. Expanding on the unbinned
likelihood analysis presented in section III, an extended
source hypothesis must be created that takes into account
both the GRB’s localization error and IceCube’s angular
uncertainty:
\[
S_{\mathrm{space}}(\vec{x}_i) = \frac{1}{N}\int_{4\pi} d\Omega\; e^{-\left(\frac{(\vec{r}-\vec{r}_\gamma)^2}{2\sigma_\gamma^2} + \frac{(\vec{r}-\vec{r}_\nu)^2}{2\sigma_\nu^2}\right)} \tag{2}
\]
where r⃗_γ and σ_γ are the location and uncertainty of the
GRB as reported in the GCN circular, r⃗_ν and σ_ν are the
reconstructed direction and uncertainty of the IceCube
neutrino candidate, and N is a normalization.
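A minimal numerical sketch of this convolution is given below, assuming NumPy; it evaluates the integral on a simple (θ, φ) grid and, as a simplifying choice, expresses both Gaussians in terms of the angular distance on the sphere (the example positions and uncertainties are placeholders):

```python
import numpy as np

def unit_vec(theta, phi):
    return np.stack([np.sin(theta) * np.cos(phi),
                     np.sin(theta) * np.sin(phi),
                     np.cos(theta)], axis=-1)

def s_space(r_gamma, sigma_gamma, r_nu, sigma_nu, n_theta=400, n_phi=800):
    """Unnormalized integral over the sphere of the product of the two Gaussians."""
    theta = np.linspace(0.0, np.pi, n_theta)
    phi = np.linspace(0.0, 2.0 * np.pi, n_phi, endpoint=False)
    tt, pp = np.meshgrid(theta, phi, indexing="ij")
    r = unit_vec(tt, pp)                                  # grid of unit vectors
    d_g = np.arccos(np.clip(r @ r_gamma, -1.0, 1.0))      # angular distance to the GRB
    d_n = np.arccos(np.clip(r @ r_nu, -1.0, 1.0))         # angular distance to the track
    integrand = np.exp(-d_g**2 / (2 * sigma_gamma**2) - d_n**2 / (2 * sigma_nu**2))
    d_omega = np.sin(tt) * (theta[1] - theta[0]) * (phi[1] - phi[0])
    return np.sum(integrand * d_omega)

# Placeholder example: a GRB localized to 3 deg, a track reconstructed 2 deg away
# with a 1 deg paraboloid uncertainty.
r_gamma = unit_vec(np.radians(80.0), np.radians(0.0))
r_nu = unit_vec(np.radians(80.0), np.radians(2.0))
print("unnormalized S_space:", s_space(r_gamma, np.radians(3.0), r_nu, np.radians(1.0)))
```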
Sensitivity studies are currently being performed and
will be available soon.
VI. CONCLUSIONS
Satellite triggered searches for neutrinos in coinci-
dence with GRBs use many common techniques. The
southern hemisphere
ν
µ
search is a first attempt to ex-
tend IceCube’s sensitivity to GRBs into the higher back-
ground region above the horizon. The cascade search
provides sensitivity to all neutrino flavors over 4
π
sr.
The 40-string search provides greater sensitivity due to
IceCube’s growing effective area and greater number of
burst triggers from Fermi.
Fig. 3. The calculated neutrino spectrum (E² · F in GeV cm⁻², versus E in GeV) for 102 of the 116 northern hemisphere bursts for which spectral information was available. The sum of the neutrino spectra is plotted along with the average Waxman and Bahcall spectrum for a single burst and for 102 bursts.
Fig. 4. Effective area (in m²) of the 40-string IceCube detector for muon neutrinos as a function of neutrino energy, shown for different bands in cos(δ).
REFERENCES
[1] E. Waxman, Phys. Rev. Lett., vol. 75, p. 386, 1995.
[2] E. Waxman and J. N. Bahcall, Phys. Rev. Lett., vol. 78, p. 2292, 1997.
[3] S. Razzaque, P. Meszaros, and E. Waxman, Phys. Rev., vol. D68, p. 083001, 2003.
[4] E. Waxman and J. N. Bahcall, ApJ, vol. 541, p. 707, 2000.
[5] D. Guetta et al., Astropart. Phys., vol. 20, p. 429, 2004.
[6] A. Karle (IceCube Coll.) et al., in Proc. International Cosmic Ray Conference (ICRC'09), Łódź, Poland, 2009.
[7] E. Andrés (AMANDA Coll.) et al., Nature, vol. 410, p. 441, 2001.
[8] A. Achterberg (IceCube Coll.) et al., ApJ, vol. 664, p. 397, 2007.
[9] ——, ApJ, vol. 674, p. 357, 2008.
[10] T. DeYoung, S. Razzaque, and D. F. Cowen, Astropart. Phys., vol. 27, pp. 238–243, Apr. 2007.
[11] A. Kappes, P. Roth, E. Strahler (IceCube Coll.) et al., in Proc. International Cosmic Ray Conference (ICRC'09), Łódź, Poland, 2009.
[12] A. Gazizov and M. O. Kowalski, Comp. Phys. Comm., vol. 172, p. 203, 2005.
[13] D. Heck et al., Technical Report FZKA, vol. 6019, 1998.
[14] D. Chirkin and W. Rhode, Preprint hep-ph/0407075, 2004.
[15] J. Lundberg et al., Nucl. Inst. Meth., vol. A581, p. 619, 2007.
[16] J. Ahrens (AMANDA Coll.) et al., Nucl. Inst. Meth., vol. A524, p. 169, 2004.
[17] J. Zornoza, D. Chirkin (IceCube Coll.) et al., in Proc. International Cosmic Ray Conference (ICRC'07), Mérida, Mexico, Aug. 2007.
[18] C. Cortes and V. Vapnik, Machine Learning, vol. 20, no. 3, p. 273, 1995.
[19] J. Braun, J. Dumm, F. de Palma, C. Finley, A. Karle, and T. Montaruli, Astropart. Phys., vol. 29, p. 299, 2008.
[20] R. J. Barlow, Statistics. Wiley, 1989.
[21] G. C. Hill, J. Hodges, B. Hughey, A. Karle, and M. Stamatikos, in Proc. PHYSTAT 05: Statistical Problems in Particle Physics, Oxford, United Kingdom, Sep. 2005.
Search for GRB neutrinos via a (stacked) time profile analysis
Martijn Duvoort∗ and Nick van Eijndhoven∗ for the IceCube Collaboration†
∗ University of Utrecht, The Netherlands
† see special section of these proceedings
Abstract. An innovative method to detect high-
energy neutrinos from Gamma Ray Bursts (GRBs) is
presented. The procedure provides a good sensitivity
for both prompt, precursor and afterglow neutrinos
within a 2 hour time window around the GRB
trigger time. The basic idea of the method consists
of stacking of the observed neutrino arrival times
with respect to the corresponding GRB triggers. A
possible GRB neutrino signal would manifest itself
as a clustering of signal candidate events in the
observed time profile. The stacking procedure allows a
signal to be identified even in the case of very low rates.
We outline the expected performance of analysing
four years of AMANDA data (2005–2008) for a
sample of 130 GRBs. Because of the extreme optical
brightness of GRB080319B, it might be that this
particular burst yielded multiple detectable neutrinos
in our detector. As such, the method has also been
applied to the data of this single burst time profile.
The results of this analysis are presented in a separate
section.
Keywords: GRB Neutrinos AMANDA/IceCube
I. INTRODUCTION
Gamma Ray Bursts are among the most promising
sources for high-energy neutrino detection: the accurate
localization and timing information presently available
enables very effective background reduction for high-
energy neutrino detectors. Yet, no previous search for
GRB neutrinos has led to a discovery [1], [2]. As most
models of a GRB jet predict neutrino formation simul-
taneous with the prompt γ emission, previous analyses
aim to discover neutrinos that arrive simultaneously with
the prompt photons. However, it might be that the main
GRB neutrino signal is not simultaneous with the prompt
gammas, either in production or arrival at the Earth.
A variety of the models predict the formation of high-
energy neutrinos at different stages in the evolution of
a GRB. Afterglow models predict a significant neutrino
flux a few seconds after the prompt emission [3], [4].
The existence of multiple colliding shells in a GRB
jet [5] may also lead to a time difference between
high-energy gamma emission and neutrinos. Even if the
neutrinos and photons are produced at the same stage
of the evolution of the GRB jet, a time difference at
the observer may be present: as the jet evolves, it will
become transparent for photons at a later stage than for
neutrinos. Therefore, neutrinos might be able to escape
the source region well before the high-energy photons.
This will depend heavily on the actual stage in the
evolution of the jet.
For our analysis we use the data of the AMANDA-
II detector at the South Pole [6] to look for a neutrino
signal. Our analysis method is aimed to be less model
dependent than previous GRB analyses. It is insensitive
to a possible time difference between the arrival of the
prompt photons and the high-energy neutrino signal. We
limit the dependence on the expected neutrino spectrum
by not using any energy dependent selection criteria.
We only use directional selection parameters based on
the reconstructed muon track, resulting from an incident
muon neutrino [6]. As the detectable number of signal
neutrinos in our detector per GRB is very low [7]
(≪ 1), our method is designed to allow for gaining
sensitivity to a GRB neutrino signal by stacking neutrino
data of multiple GRBs around their trigger time. Those
stacked time profiles can be analysed using the same
techniques as the time profile of a single GRB. We
first outline the analysis method itself, then we give the
results of applying this method to GRB080319B, the
most luminous GRB observed to date.
II. THE ANALYSIS METHOD
We look for signal events correlated with the GRB
direction and time. As the background of our detector,
which consists of cosmic ray events, is not correlated,
we start by filtering the data for a GRB coincidence, both
spatially and temporally. The exact selection parameters
we use are optimized as outlined in section III.
The GRB data that passes the cuts has a certain time-
distribution with respect to the GRB trigger time. The
background events that pass the cuts will be uniformly
distributed in time with respect to the GRB trigger. A
possible GRB signal will be clustered in time. Note that
this argument also holds for the case of stacking multiple
GRB time windows, which is the main purpose of this
analysis method. Here we assume that the intrinsic time
difference between photons and neutrinos is a charac-
teristic feature for all GRBs in our sample. Obviously
we aim to have all GRB signal neutrinos ending up in
the same time-bin. Therefore, the usage of a too small
time bin will reduce the sensitivity as signal entries will
end up in different bins. Using a too large time bin
also reduces the sensitivity as background entries will
start to dominate the bins. We estimate the timespread
of the neutrino signal to be of the same order as the
observed photonic GRB duration: the T90 time, defined
as the time in which 5%–95% of the GRB fluence
was detected. This is a safe estimate as the intrinsic
timespread of the neutrino signal will not be larger than
that of the photons: as the source region will always be
more opaque for photons than for neutrinos, the photon
signal will spread more in time than the neutrino signal.
We have chosen a conservative bin size of 60 s, resulting
in 120 time bins in our 2 hour window.
Fig. 1. The ψ distribution for randomizing 13 entries in 120 bins (10⁸ randomizations).
The probability of observing a certain time distribution,
with in total n entries divided over m time bins,
given a uniform background is given by the
multinomial distribution [8]:

\[
p(n_1, n_2, \ldots, n_m\,|\,n,m) = \frac{n!}{n_1!\cdots n_m!}\; p_1^{n_1}\cdots p_m^{n_m} \equiv p. \tag{1}
\]

Here p_i is the probability of an entry ending up in bin
i. In case of a uniform background this is simply m⁻¹.
The n_i represents the number of entries in bin i. We
derive the Bayesian ψ ≡ −10 log p [9]:

\[
\psi = -10\left[\log n! + \sum_{k=1}^{m}\left(n_k \log p_k - \log n_k!\right)\right]. \tag{2}
\]
If the observation is due to the expected background,
a low ψ value will be obtained. Deviations from the
expected background will result in increased ψ values.
We intend to compare the ψ value of the observed
data, including a possible signal, with the distribution
of uniform background ψ values. We obtain such background
sets by (uniformly) randomizing the entries in
the two hour time window, keeping the total number
of entries constant to what we find in the data. In
case of a large signal contribution, this may result in
underestimating the significance of the signal. However,
for such a high signal contribution we will be able to
claim discovery anyway. To claim a discovery we require
at least a 5σ level, which means that only a fraction of
5.73 × 10⁻⁷ (the corresponding P-value) of all the ψ values of
the various background sets is allowed to exceed some
threshold ψ₀. In case the ψ value of our observed data
is larger than ψ₀, we have a discovery.
In order to reach the necessary accuracy, we perform
10⁸ randomizations of all the data events that pass the
criteria and calculate the ψ value of each randomization
to obtain a background ψ distribution. In figure 1 we
give one example of our parameter space. Here n = 13
entries exist in our simulated observation time window
of 120 bins.
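The ψ statistic and its randomized background distribution are simple to code up; a minimal sketch follows, assuming NumPy/SciPy, base-10 logarithms, and far fewer than 10⁸ trials just to keep the example light:

```python
import numpy as np
from scipy.special import gammaln

def psi(counts, p):
    """Eq. (2): psi = -10 [ log n! + sum_k ( n_k log p_k - log n_k! ) ], base-10 logs."""
    counts = np.asarray(counts, dtype=float)
    log10_fact = lambda n: gammaln(n + 1.0) / np.log(10.0)
    return -10.0 * (log10_fact(counts.sum())
                    + np.sum(counts * np.log10(p) - log10_fact(counts)))

def background_p_value(observed_counts, n_trials=200000, seed=5):
    """Fraction of uniform randomizations with psi >= psi(observed)."""
    m = len(observed_counts)
    p = np.full(m, 1.0 / m)                       # uniform background expectation
    psi_obs = psi(observed_counts, p)
    rng = np.random.default_rng(seed)
    trials = rng.multinomial(int(np.sum(observed_counts)), p, size=n_trials)
    psi_bkg = np.array([psi(t, p) for t in trials])
    return psi_obs, float(np.mean(psi_bkg >= psi_obs))

# Toy configuration: 13 entries in 120 one-minute bins, 3 of them in a single bin.
obs = np.zeros(120, dtype=int)
obs[:10] = 1      # background-like entries spread over different bins
obs[60] = 3       # clustered entries near the trigger time
psi_obs, p_val = background_p_value(obs)
print(f"psi = {psi_obs:.2f}, P-value = {p_val:.2e}")
```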
Fig. 2. The time distribution of 3 signal events (at t = 0) and 10
randomly distributed events in a 2 hour window using 60 s bin size.
As an example, one might observe a time distribution,
consisting of 3 signal events in a single bin and
10 randomly distributed background entries, which is
shown in figure 2. The ψ value associated with this
distribution equals 186.15. When comparing with the
background ψ distribution of figure 1 it becomes clear
that this corresponds to a P-value of 1.13 × 10⁻³ above
the observed ψ₀ = 186.15. For a 5σ discovery we need
this fraction to be less than 5.73 × 10⁻⁷. Therefore,
observing a time profile like figure 2 will not result in
a significant discovery.
III. OPTIMIZATION OF THE SELECTION PARAMETERS
The significance of our observation is determined by
the method outlined in section II. Before we do this
we need to optimize the directional parameter values
which we use for selecting the final event sample by
means of a blind analysis. In order to stay comparable
to previous analyses, we will use the standard Model
Discovery Factor (MDF) [10], [11] to determine the
optimum of our parameter space. At those optimal
settings the standard Model Rejection Factor (MRF) [12]
is calculated.
The average expected number of background counts
per time bin, μ_b, is calculated by simply dividing the
total number of observed entries in the time window
by the number of time bins. This is justified by the
assumption that the expected signal is much smaller
than the background, μ_b ≫ μ_s. We optimize the selection
parameters for a 5σ discovery. The significance we use
in the calculation of the MDF is corrected for a trial
factor due to the number of bins.
By systematically going through the grid of our
parameter space, we reach the parameter values corresponding
to a minimum MDF, i.e. we optimize our
analysis for discovery. In case of no discovery, the MRF
at these settings will provide a flux upper limit. Since we
optimize our parameters on the randomized data itself,
our background set consists of randomized background
plus signal entries. For parameter settings where fewer
than four entries pass, the ψ statistic cannot result in
a discovery: all possible P-values exceed 5.73 × 10⁻⁷.
Therefore, we require that at our optimal thresholds,
at least four events pass our filter. This is achieved by
slightly relaxing the selection criteria.
IV. THE GRB080319B AMANDA ANALYSIS
Even though the expected number of signal neutrinos
for an average GRB is extremely low, the atypical
GRB080319B might yield an unusually strong neutrino
signal justifying an individual neutrino analysis. The
analysis of IceCube data [13] was confined to 10 minutes
around the GRB trigger time. No neutrino signal was
found. We analyse a larger data block from one hour
before till one hour after the trigger time, and use the
same spectrum as in [13] for our optimization and limit:
\[
\frac{dN_\nu}{dE_\nu} =
\begin{cases}
6.620\times 10^{-16}\cdot E_\nu^{\,0.59} & \text{if } E_\nu \le E_1,\\
0.768\cdot E_\nu^{-2.145} & \text{if } E_1 \le E_\nu \le E_2,\\
6.690\times 10^{12}\cdot E_\nu^{-4.145} & \text{if } E_\nu \ge E_2,
\end{cases} \tag{3}
\]

with the fluence, dN_ν/dE_ν, in (GeV cm²)⁻¹ and the
break energies E_1 = 322.064 TeV and E_2 = 2952.35 TeV.
For our time profile we use a bin size of 60 s,
roughly the T90 of this burst. While we expect this to be
wide enough for the GRB neutrino signal to fit in one
bin, a possible neutrino signal can be spread over two
adjacent bins. This obviously lowers the significance of
the observation. Therefore, in case we do not find a 5σ
result with our initial analysis, we compensate for this
binning effect by performing our analysis a second time,
where we shift our bins by half a bin width.
Using a simulated neutrino fluence following the
spectrum (3), we obtain the optimum of our parameter
space following the method of section III. We find at
the optimal parameter settings a 5σ MDF of 123.65
and have six events passing the filter. Based on the
GRB spectrum we expect 0.064 signal entries to pass
the filter. We find (at 90% confidence level) an MRF of
38.8 for the GRB spectrum. Likewise, using a generic
E⁻² spectrum, we obtain at these settings a limit of
E² dN_ν/dE_ν = 1.11 × 10⁻² (GeV cm² s)⁻¹ at 90%
confidence level. Note that these limits are conservative
as the ψ statistic we use to claim discovery is more
sensitive than the Poisson statistics on which the MRF
is based.
The previous analysis of IceCube data [13] quotes
a sensitivity at a fluence of 22.7 times the expected
spectrum at 90% C.L. for prompt emission. We find a
90% C.L. limit at 38.8 times the expected spectrum for a
neutrino signal arriving in the central bin. This difference
can be seen in figure 3, where the limits of both analyses
are given.
Fig. 3. The 90% C.L. upper limits on the fluence of GRB080319B with respect to the calculated neutrino fluence (3) for both this analysis and the IceCube analysis in its 9-string configuration.
The neutrino effective areas for the AMANDA detector
for this analysis are given in figure 4, both at trigger
level and at the level where all our selections have been
applied. The ratio between the two histograms is the
signal passing rate, which, for the GRB spectrum (3),
equals 49.4%.
Fig. 4. The neutrino effective area for the position of GRB080319B, both at trigger level and at final cut level.
The time profile we find after unblinding is consistent
with the background-only hypothesis. Repeating the
analysis with shifted time bins does not change this.
V. THE STACKING ANALYSIS
In this section we present the expected results of
analysing the stacked AMANDA data of 130 GRBs
between 2005 and 2008. These well-localized bursts are
all in the Northern hemisphere to reduce the background
due to atmospheric events. The time profile of each GRB
is sampled to form a stacked time profile.
Due to the different redshifts of the GRBs in the
sample, the effect of cosmological time dilation on the
intrinsic time difference between photons and neutrinos
will result in a timespread on the arrival of the neutrino
signal. This spread will increase for larger time differ-
ences. We compensate for this by enlarging the bin sizes
for bins further away from the trigger. Each bin will
be enlarged by a factor of ⟨z⟩ + 1, where ⟨z⟩ is the
average redshift of the GRBs in our sample. We choose
to have our central bin range from −⟨T90⟩ ≈ −30 s
to +⟨T90⟩ ≈ +30 s, allowing for a scatter in the neutrino
arrival time of the average length of the photon signal.
The second bin is a factor of ⟨z⟩ + 1 ≈ 3 larger than the
maximum scatter we allow in the center bin and ranges
over [30, 120] s (and [−120, −30] s). The next bin is again a
factor of 3 larger.
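A small sketch of this binning, assuming NumPy: it builds symmetric edges that grow by a factor ⟨z⟩ + 1 ≈ 3 away from the trigger and derives the per-bin probabilities p_i for a uniform background, as used in the ψ statistic above:

```python
import numpy as np

def variable_bin_edges(half_window=3600.0, central_half_width=30.0, growth=3.0):
    """Bin edges (seconds relative to the GRB trigger) for the 2 hour window."""
    edges = [central_half_width]
    width = central_half_width * growth          # first outer bin is 90 s wide
    while edges[-1] < half_window:
        edges.append(min(edges[-1] + width, half_window))
        width *= growth
    pos = np.array(edges)
    return np.concatenate([-pos[::-1], pos])     # mirror to negative times

edges = variable_bin_edges()
widths = np.diff(edges)
p_i = widths / widths.sum()                      # uniform-background bin probabilities
print(edges)   # [-3600. -1200. -390. -120. -30. 30. 120. 390. 1200. 3600.] -> 9 bins
print(p_i)
```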
The fact that the bins in our time window have
unequal sizes does not influence our method. It is simply
taken into account by using, for each bin, the correct
p_i, the probability for an entry to fall in that bin, see
equation (2). Let us consider the same time profile as
above (figure 2) with these new bin settings. This leads
to the time profile as given in figure 5. Because our
time window now has variable binning, the configuration
itself changed significantly with respect to the regular
Fig. 5.
The time distribution of 3 signal events and 10 randomly
distributed events in a 2 hour window. Here we use nine variable time
bins as explained in the text.
case of figure 2. Hence the new ? value of our obser-
vation (63:44) differs from the previously found value.
The background ? distribution of figure 1 will change
accordingly. Following our example, one can study the
TABLE I
Comparison of the significance for the time profile of figure 2 and for the same situation using the variable bins of figure 5. Here we vary the number of signal entries in the center bin, n_signal; the 10 background entries are left untouched.

  n_signal   P-value (regular 60 s bins)   P-value (variable bins)
     1            6.32 × 10^-2                  2.36 × 10^-1
     2            9.67 × 10^-3                  3.77 × 10^-2
     3            1.13 × 10^-3                  3.14 × 10^-3
     4            2.71 × 10^-5                  1.80 × 10^-4
     5            1.1 × 10^-6                   7.8 × 10^-6
     6            1 × 10^-8                     2.2 × 10^-7

From table I one can see that introducing variable bin sizes slightly lowers the significance of our observations (i.e., increases their P-value) for a signal falling in the center bin.
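For completeness, the probabilities p_i entering equation (2) follow directly from the bin edges; the sketch below (Python, illustrative only) assumes a uniform, background-like arrival-time distribution over the 2-hour window, which is how the randomly distributed events of figures 2 and 5 are generated.

    def bin_probabilities(bins, window=(-3600.0, 3600.0)):
        """p_i for each time bin: the probability that a uniformly distributed,
        background-like entry falls into bin i, i.e. bin width over window length."""
        total = window[1] - window[0]
        return [(hi - lo) / total for lo, hi in bins]

    # The nine variable bins of figure 5: the center bin plus four mirrored outer bins
    bins = [(-3600, -1200), (-1200, -390), (-390, -120), (-120, -30),
            (-30, 30),
            (30, 120), (120, 390), (390, 1200), (1200, 3600)]
    p = bin_probabilities(bins)
    print(p, sum(p))                     # probabilities sum to 1; the center bin has p = 60/7200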
For the optimization of the selection parameters we use both a Waxman-Bahcall and a generic E^-2 spectrum. Again, we optimize for discovery using the standard MDF. As a result of the various bin sizes, the limit of this analysis depends on the bin size, and therefore on the time difference with respect to the GRB trigger. The sensitivity of this analysis for each bin in our time window is given in table II.
TABLE II
The 90% C.L. sensitivity of the stacking analysis for each time bin, for both the Waxman-Bahcall (WB) and a generic E^-2 spectrum.

  Time range w.r.t. GRB trigger   WB spectrum at 1 PeV [(GeV cm² s sr)^-1]   E² dN_ν/dE_ν [(GeV cm² s sr)^-1]
  [−30, 30] s                     2.9 × 10^-8                                1.55 × 10^-8
  ±[30, 120] s                    3.0 × 10^-8                                1.58 × 10^-8
  ±[120, 390] s                   3.3 × 10^-8                                1.76 × 10^-8
  ±[390, 1200] s                  4.2 × 10^-8                                2.24 × 10^-8
  ±[1200, 3600] s                 5.8 × 10^-8                                3.09 × 10^-8

For the central bin we show the limits in figure 6. Note that these limits only apply to a neutrino signal arriving simultaneously with the prompt photon emission.

Fig. 6. The sensitivity of this analysis for both a generic E^-2 and a Waxman-Bahcall (WB) source spectrum (90% C.L.).
VI. DISCUSSION
Currently, the most restrictive muon neutrino upper limit has been determined by AMANDA at E² dN_ν/dE_ν ≤ 1.7 × 10^-8 GeV cm^-2 s^-1 sr^-1, based on a sample of over 400 GRBs and for the Waxman-Bahcall spectrum at 1 PeV [2]. For our analysis no energy-dependent selection parameters are used, and the optimum of the selection parameters is independent of the source spectrum we use. As such, our analysis is less model dependent, and it allows for a possible time difference between photons and neutrinos. Furthermore, the stacking procedure provides sensitivity even in the case of very low individual GRB rates. As such, the present analysis has the potential of detecting precursor and afterglow neutrinos in addition to prompt ones. The method may also be used to analyse the data of individual GRBs. By construction our method is slightly less sensitive than a model-dependent analysis of a single time bin. The effective area of the complete IceCube detector will be at least ∼150 times larger than AMANDA's [14]. Applying our analysis to one year of data from the full IceCube detector would result in a sensitivity well below the predicted Waxman-Bahcall spectrum.
REFERENCES
[1] Achterberg, A., et al. 2008, Astrophys. J. 674, 357
[2] Achterberg, A., et al. 2007, Astrophys. J. 664, 397
[3] Waxman, E., & Bahcall, J. N. 2000, Astrophys. J., 541, 707
[4] Dai, Z. G., & Lu, T. 2001, Astrophys. J. 551, 249
[5] Dar, A., & De Rujula, A. 2001, arXiv:astro-ph/0105094
[6] Ahrens, J., et al. 2004, Nucl. Instr. Meth. A 524, 169
[7] Halzen, F., & Hooper, D. W. 1999, Astrophys. J. 527, L93
[8] Gregory, P. C. 2005, Bayesian Logical Data Analysis for the Physical Sciences, Cambridge University Press, UK
[9] van Eijndhoven, N. 2008, Astropart. Phys. 28, 540
[10] Punzi, G. 2003, Statistical Problems in Particle Physics, Astro-
physics and Cosmology, 79
[11] Hill, G. C., Hodges, J., Hughey, B., Karle, A., & Stamatikos, M.
2006, Statistical Problems in Particle Physics, Astrophysics and
Cosmology, 108
[12] Hill, G. C., & Rawlins, K. 2003, Astropart. Phys. 19, 393
[13] Abbasi, R., et al. (IceCube Collaboration) 2009, arXiv:0902.0131
[14] Ahrens, J., et al. 2004, Astropart. Phys. 20, 507
Optical follow-up of high-energy neutrinos detected by IceCube
Anna Franckowiak∗, Carl Akerlof§, D. F. Cowen∗†, Marek Kowalski∗, Ringo Lehmann∗, Torsten Schmidt‡ and Fang Yuan§ for the IceCube Collaboration¶ and for the ROTSE Collaboration
∗ Humboldt-Universität zu Berlin
† Pennsylvania State University
‡ University of Maryland
§ University of Michigan
¶ see the special section in these proceedings
Abstract. Three-quarters of the 1 km³ neutrino telescope IceCube is currently taking data. Current models predict high-energy neutrino emission from transient objects like supernovae (SNe) and gamma-ray bursts (GRBs). To increase the sensitivity to such transient objects we have set up an optical follow-up program that triggers optical observations on multiplets of high-energy muon-neutrinos. We define multiplets as a minimum of two muon-neutrinos from the same direction (within 4°) that arrive within a 100 s time window. When this happens, an alert is issued to the four ROTSE-III telescopes, which immediately observe the corresponding region in the sky. Image subtraction is applied to the optical data to find transient objects. In addition, neutrino multiplets are investigated online for temporal and directional coincidence with gamma-ray satellite observations issued over the Gamma-Ray Burst Coordinate Network. An overview of the full program is given, from the online selection of neutrino events to the automated follow-up, and the resulting sensitivity to transient neutrino sources is presented for the first time.
Keywords: Neutrinos, Supernovae, Gamma-Ray Bursts
I. INTRODUCTION
When completed, the in-ice component of IceCube will consist of 4800 digital optical modules (DOMs) arranged on 80 strings frozen into the ice at depths ranging from 1450 m to 2450 m [1]. Furthermore, there will be six additional strings densely spaced in the bottom half of the detector. The total instrumented volume of IceCube will be 1 km³. Each DOM contains a photomultiplier tube and supporting hardware inside a glass pressure sphere. The DOMs indirectly detect neutrinos by measuring the Cherenkov light from secondary charged particles produced in neutrino-nucleon interactions. IceCube is most sensitive to neutrinos in the TeV to PeV energy range and is able to reconstruct the direction of muon-neutrinos with a precision of ∼1°.
The search for neutrinos of astrophysical origin is among the primary goals of the IceCube neutrino telescope. Source candidates include galactic objects like supernova remnants as well as extragalactic objects like Active Galactic Nuclei and Gamma-Ray Bursts [9], [10].

Fig. 1: Neutrino event spectrum in the IceCube detector, from kaon and pion decay in the supernova-jet model of Ando and Beacom [5].
Offline searches for neutrinos in coincidence with GRBs
have been performed on AMANDA and IceCube data.
They have not yet led to a detection, but have set upper limits on the predicted neutrino flux [13]. While the
rate of GRBs with ultra-relativistic jets is small, a much
larger fraction of SNe not associated with GRBs could
contain mildly relativistic jets. Such mildly relativistic
jets would become stalled in the outer layers of the
progenitor star, leading to essentially full absorption of
the electromagnetic radiation emitted by the jet. Hence,
with the postulated presence of mildly relativistic jets
one is confronted with a plausible but difficult-to-test hy-
pothesis. Neutrinos may reveal the connection between
GRBs, SNe and relativistic jets. As was recently shown,
mildly relativistic jets plowing through a star would be
highly efficient in producing high-energy neutrinos [5]–
[7]. The predicted neutrino spectrum follows a broken
power law and Fig. 1 shows the expected signal spectrum
for neutrinos produced in kaon and pion decay in the
source, simulated using the full IceCube simulation
chain. The expected number of signal events is small
and requires efficient search algorithms to reduce the
background of atmospheric neutrinos (see section II).
An optical follow-up program has been started which
enhances the sensitivity for detecting high-energy neutri-
nos from transient sources such as SNe. In this program,
the directions of neutrinos are reconstructed online and, if their multiplicity passes a certain threshold, a Target-of-Opportunity (ToO) notice is sent to the ROTSE-III
network of robotic telescopes. These telescopes monitor
the corresponding part of the sky in the subsequent
hours and days and identify possible transient objects,
e.g. through detection of rising supernova light-curves
lasting several days. If in this process a supernova is
detected optically, one can extrapolate the lightcurve or
afterglow to obtain the explosion time [2]. For SNe,
a gain in sensitivity of about a factor of 2-3 can be
achieved through optical follow-up observations of neu-
trino multiplets [4]. In addition to the gain in sensitivity,
the follow-up program offers a chance to identify the transient source, be it a SN, GRB or any other transient
phenomenon.
II. NEUTRINO ALERT SYSTEM
IceCube’s optical follow-up program has been operat-
ing since fall of 2008. In order to match the requirements
given by limited observing time at the optical telescopes,
the neutrino candidate selection has been optimized to
obtain less than about 25 background multiplets per year.
The trigger rate of the 40 string IceCube detector is about
1000 Hz. The muon filter stream reduces the rate of
down-going muons created in cosmic ray showers dra-
matically by limiting the search region to the Northern
hemisphere and a narrow belt around the horizon. The
resulting event stream of 25 Hz is still dominated by
misreconstructed down-going muons. Selection criteria
based on track quality parameters, such as the number of direct hits¹, track length and likelihood of the reconstruction, yield a reduced event rate of 1 event/(10 min).
The optimized selection criteria are relaxed to improve the signal efficiency: 50% of the surviving events are still misreconstructed down-going muons, while 50% are atmospheric neutrinos. During the Antarctic summer 2008/2009, 19 additional strings were deployed, which have been included in the data taking since the end of April 2009. To account for the enhanced rate due to the increased detector volume, the selection criteria have been adjusted and will yield a cleaner event sample containing only 30% misreconstructed muons.

¹ Hits that are measured within [−15 ns, 75 ns] of the predicted arrival time of Cherenkov photons, without scattering, given the track geometry.
From this improved event sample, neutrino multiplet
candidates with a time difference of less than 100 s
and with an angular difference (or ’space angle’) of less
than 4° are selected. The choice of the time window size is motivated by jet penetration times. Gamma-ray emission observed from GRBs has a typical length of 40 s, which roughly corresponds to the time it takes a highly relativistic jet to penetrate the stellar envelope. The angular difference is determined by IceCube's angular resolution. Assuming single events from the same true direction, 75% of all doublets are confined to a space angle of 4° after reconstruction. Once a multiplet is found, a combined direction is calculated as a weighted average of the individual reconstructed event directions, with weights derived from the estimated direction resolution of each track. The resolution of the combined direction is up to a factor of 1/√2 better than that of individual tracks.
is sent via the network of Iridium satellites from the
South Pole to the North, where it gets forwarded to the
optical telescopes. At present, due to the limited parallelization of the data processing at the South Pole, a delay of 8 hours is accumulated. In the near future, the
online processing pipeline will be upgraded, reducing
the latency drastically to the order of minutes.
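To make the multiplet condition and the resolution-weighted direction average described in this section concrete, here is a minimal sketch (Python with NumPy; the event format and numbers are invented for illustration, and this is not the online filter code).

    import numpy as np

    def unit_vector(zenith, azimuth):
        """Cartesian unit vector from zenith and azimuth angles (radians)."""
        return np.array([np.sin(zenith) * np.cos(azimuth),
                         np.sin(zenith) * np.sin(azimuth),
                         np.cos(zenith)])

    def space_angle(v1, v2):
        """Opening angle between two unit vectors, in degrees."""
        return np.degrees(np.arccos(np.clip(np.dot(v1, v2), -1.0, 1.0)))

    def is_multiplet(t1, t2, v1, v2, dt_max=100.0, angle_max=4.0):
        """Doublet condition used for alerts: arrival times within 100 s and
        reconstructed directions within a 4 degree space angle."""
        return abs(t1 - t2) <= dt_max and space_angle(v1, v2) <= angle_max

    def combined_direction(vectors, sigmas):
        """Weighted average of the track directions with weights 1/sigma^2;
        for two tracks of similar quality the resolution improves by ~1/sqrt(2)."""
        weights = 1.0 / np.asarray(sigmas) ** 2
        mean = np.average(np.asarray(vectors), axis=0, weights=weights)
        return mean / np.linalg.norm(mean)

    # Hypothetical doublet: 40 s apart, ~1.8 degrees apart, 0.8 and 1.2 degree resolutions
    v1 = unit_vector(np.radians(95.0), np.radians(40.0))
    v2 = unit_vector(np.radians(96.5), np.radians(41.0))
    if is_multiplet(0.0, 40.0, v1, v2):
        print("alert direction:", combined_direction([v1, v2], [0.8, 1.2]))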
A total of 14 alerts have passed the selection criteria and
were sent to the telescopes within 7 months of operation.
III. OPTICAL FOLLOW-UP OBSERVATIONS
At the moment IceCube alerts get forwarded to
the Robotic Optical Transient Search Experiment
(ROTSE) [3]. Additions to the list of participating tele-
scopes are planned. ROTSE-III is dedicated to observa-
tion and detection of optical transients on time scales of
seconds to days. The original emphasis was on GRBs, while more recently a very successful SN program has also been started. The four ROTSE-III telescopes are in-
stalled around the world (in Australia, Namibia, the USA
and Turkey). The ROTSE-III equipment is modest by
the standards of modern optical astronomy, but the wide
field of view and the fast response permit measurements
inaccessible to more conventional instruments. The four
0.45 m robotic reflecting telescopes are managed by a
fully-automated system. They have a wide field of view
(FOV) of 1.85° × 1.85°, imaged onto a 2048 × 2048 CCD, and operate without filters. The cameras have a fast readout cycle of 6 s. The limiting magnitude for a typical 60 s exposure is around 18.5 mag, which is well suited for the study of GRB afterglows during the first hour or longer. The typical full width at half maximum (FWHM) of the stellar images is smaller than 2.5 pixels (8.1 arcseconds). Note that ROTSE-III's FOV matches
the size of the point spread function of IceCube well.
Once an IceCube alert is received by one of the tele-
scopes, the corresponding region of the night sky will
be observed within seconds. A predefined observation
program is started: The prompt observation includes
thirty exposures of 60 seconds length². Follow-up
observations are performed for 14 nights. Eight images
with 60 seconds exposure time are taken per night. The
prompt observation is adjusted to the typical rapidly de-
caying lightcurve of a GRB afterglow, while the follow-
up observation of 14 days permits the identification of
an increasing SN lightcurve. Once the images are taken,
they are automatically processed at the telescope site.
Once the data is copied from the telescopes, a second
analysis is performed off-line, combining the images
from all sites. Image subtraction is performed according to the methods presented in [8]. Here the images of the first night serve as reference, while the images from the following nights are used to search for the brightening of a SN lightcurve.

² Once the delay caused by data processing at the South Pole (see section II) is reduced to the order of minutes, the prompt observation will include ten short observations of 5 seconds, ten observations of 20 seconds and twenty long exposures of 60 seconds.
IV. SENSITIVITY
The sensitivity of the optical-follow up program is
determined by both IceCube’s sensitivity to high energy
neutrino multiplets and ROTSE-III’s sensitivity to SNe.
We will distinguish two cases: The first being that no
optical counterpart is observed over the course of the
program (assuming 25 alerts per year) and the second
that a SN is identified in coincidence.
A. No Optical Counterpart Discovered
With no coincident SN observed, one obtains an upper
limit on the average number of SNe that could produce
a coincidence: N_IC/ROTSE < 2.44 (at 90% confidence level). Constraints on a given model are obtained by demanding that the model does not predict a number in excess of the SN event upper limit. We construct a simple model based on Ando & Beacom type SNe [5]. We introduce two parameters. The first is the rate of SNe producing neutrinos, ρ = (4/3π)^-1 10^-4 ρ_SN,2e-4 Mpc^-3 yr^-1.
Note that ρ_SN,2e-4 = 1 corresponds to one SN per year in a 10 Mpc sphere, about the rate of all core-collapse SNe in the local Universe [12]. Since we expect only a subset of SNe to produce high energy emission, one can assume ρ_SN,2e-4 < 1. The second parameter is the hadronic jet energy, E_jet = 3 · 10^51 ε_jet,3e51 erg, and we choose to scale the flux normalization of the model of Ando & Beacom, F_0, by ε_jet,3e51. Fig. 2 shows the constraints that one can place on the density and jet kinetic energy in the E_jet-ρ plane.
The basic shape of the constraints that can be obtained in the E_jet-ρ plane can be understood from the following considerations. The number of neutrinos depends on the jet energy and the distance: N_ν ∝ ε_jet,3e51 · r^-2. The program requires at least N_ν,min = 2 detected neutrinos in IceCube. A SN with jet energy ε_jet,3e51 produces N_ν,min neutrinos if it is closer than r_max: N_ν,min ∝ ε_jet,3e51 · r_max^-2, which yields r_max ∝ (ε_jet,3e51)^1/2. The volume V limited by r_max contains N_SN ∝ ρ_SN,2e-4 · r_max^3 SNe that can produce two neutrinos. Therefore the number of detections N^SN_IC/ROTSE is given by N^SN_IC/ROTSE ∝ ρ_SN,2e-4 · (ε_jet,3e51)^3/2. For normalization we use Ando & Beacom-like SNe, which occur at a rate of ρ_SN,2e-4 = 1 with GRB-like energies (ε_jet,3e51 = 1) and yield N^SN,AB_IC/ROTSE = 200 expected IceCube/ROTSE coincidences per year:

N^SN_IC/ROTSE = N^SN,AB_IC/ROTSE · ρ_SN,2e-4 · (ε_jet,3e51)^3/2   (1)

According to [11], a non-detection limits the number of IceCube/ROTSE coincidences at a 90% confidence level to N^SN_IC/ROTSE < 2.44. Using Eq. 1 one obtains the two-dimensional constraint on density and hadronic jet energy for this model:

ρ_SN,2e-4 · (ε_jet,3e51)^3/2 < 2.44 / N^SN,AB_IC/ROTSE ≈ 0.012,   (2)
which is a reasonably good representation of the two-dimensional constraints for not too small densities, ρ_SN,2e-4 ≳ 10^-3. For GRB-like energies (ε_jet,3e51 = 1), it follows that at most one out of 80 SNe produces Ando & Beacom-like jets in its core. Phrased in absolute terms, if no SN is detected, the rate of SNe with a mildly relativistic jet should not exceed ρ = 3.1 · 10^-6 Mpc^-3 yr^-1 (at 90% confidence level) in our program. The cut-off at small densities visible in Fig. 2 is due to ROTSE-III's limiting magnitude. The sphere (i.e. effective volume) within which ROTSE-III can detect supernovae has a radius of about 200-300 Mpc. ROTSE-III effectively cannot probe SN subclasses that occur less than once per year within this sphere.

Fig. 2: Sensitivity in the E_jet-ρ plane after one year of operation of the 40-string IceCube detector (dashed line: 90% CL; solid line: one coincident detection per year).
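The scaling of Eq. 1 and the 90% C.L. constraint of Eq. 2 reduce to a few lines of arithmetic; the sketch below (Python, illustrative only) uses the quoted normalization N^SN,AB_IC/ROTSE = 200 and converts the rate limit into physical units with the value implied by "one SN per year in a 10 Mpc sphere" (about 2.4 × 10^-4 Mpc^-3 yr^-1, an assumption of this sketch).

    N_AB = 200.0       # expected IC/ROTSE coincidences per year for rho = eps = 1 (normalization of Eq. 1)
    N_90CL = 2.44      # 90% C.L. upper limit on coincidences for a non-detection [11]
    RHO_UNIT = 2.4e-4  # Mpc^-3 yr^-1, i.e. rho_SN,2e-4 = 1 (one SN per year within a 10 Mpc sphere)

    def n_coincidences(rho_sn, eps_jet):
        """Eq. 1: expected IceCube/ROTSE coincidences per year."""
        return N_AB * rho_sn * eps_jet ** 1.5

    def rho_limit(eps_jet):
        """Eq. 2 solved for the SN rate: 90% C.L. upper limit on rho_SN,2e-4 at fixed jet energy."""
        return N_90CL / (N_AB * eps_jet ** 1.5)

    # GRB-like jet energies (eps_jet = 1): at most ~1 out of 80 SNe,
    # i.e. a rate of a few 1e-6 Mpc^-3 yr^-1, close to the quoted 3.1e-6
    print(rho_limit(1.0), rho_limit(1.0) * RHO_UNIT)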
B. Significance in case of a detection
Next we address the case that a SN was detected
in the follow-up observations. The task mainly consists
of computing the significance of the coincidence. We
compute this for one year of data and 25 alerts. Each
alert leads to the observation of a ΔΩ = 1.85° × 1.85° = 3.4 square degree field, hence over the course of the year ROTSE-III covers a fraction of the sky given by ΔΩ/4π × N_alerts = 2.1 · 10^-3. Next assume that the time window for a coincidence of an optical SN detection and a candidate neutrino multiplet is given by Δt_d, the accuracy with which we can determine the initial time of the supernova explosion. Studying the lightcurve of supernova SN2008D, which has a known explosion start time given by an initial X-ray flash, we have developed an accurate way to estimate Δt_d from
a SN lightcurve [2]. We fit the light curve data to a
model that postulates a phase of blackbody emission
followed by a phase dominated by pure expansion of the
luminous shell. Explosion times can be determined from
the lightcurve with an accuracy of less than 4 hours. A
detailed description of this method can be found in [2].
The number of accidental SNe found will be proportional to Δt_d and to the total number of SNe per year that ROTSE-III would have the sensitivity to detect if it surveyed the sky at all times, N_ROTSE ≈ 10^4. Putting all this together, the number of random coincidences is:

N_bg = N_alerts N_ROTSE (ΔΩ/4π) × (Δt_d / yr) = 0.056 Δt_d / d.   (3)

For N_bg ≪ 1 this corresponds to the chance probability p = 1 − exp(−N_bg) ≈ N_bg of observing at least one random background event. For Δt_d = 1 d and no other information, the observation of a SN in coincidence with a neutrino signal would have a significance of about 2σ.
The significance can be improved by adding neutrino
timing information as well as the distance information
of the object found. We first discuss the extra timing
information. So far we have only required that two
neutrinos arrive within 100 s to produce an alert. Thus,
in the analysis presented above, the significance for two
events 1 s apart would be the same as for 99 s difference.
Since the probability p_t of finding a time difference less than Δt_ν due to a background fluctuation is given by p_t = Δt_ν/100 s, assuming a uniform background, we include the time difference in the chance probability. Next we discuss the use of the SN distance. One can safely assume that there will be a strong preference for nearer SNe, since these are most likely to lead to a neutrino flux large enough to produce a multiplet in IceCube. Using the distance d_SN as an additional parameter, one can compute the probability to observe a background SN at a distance d ≤ d_SN. The probability is given by the ratio of SNe observed by ROTSE-III within the sphere of radius d_SN to all SNe: p_d = N_ROTSE(d)/N_ROTSE. In case of a detection, both d_SN and Δt_ν will be available. We use a simple Monte Carlo to obtain the significance of this detection. For example, the detection of two neutrinos with a temporal difference of Δt_ν = 10 s in coincidence with a SN at a distance d_SN = 20 Mpc has a p-value of 5 · 10^-4, which corresponds to 3.5σ, assuming a total of N_alerts = 25 alerts found in the period of one year.
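The ingredients of this significance estimate can be checked with a short calculation; the sketch below (Python, illustrative only) reproduces the Eq. 3 background expectation and shows how the thinning factors p_t and p_d enter. The quoted 5 · 10^-4 p-value additionally requires the actual ROTSE-III SN distance distribution and the dedicated Monte Carlo, which this closed-form toy does not replace.

    import math

    N_ALERTS = 25                    # neutrino alerts per year
    N_ROTSE = 1.0e4                  # SNe per year ROTSE-III could detect surveying the full sky
    FOV_SQDEG = 1.85 * 1.85          # one ROTSE-III field, about 3.4 square degrees
    SKY_SQDEG = 4.0 * math.pi * (180.0 / math.pi) ** 2

    def n_background(dt_d_days):
        """Eq. 3: expected random SN/alert coincidences for a coincidence window dt_d (days)."""
        sky_fraction = FOV_SQDEG / SKY_SQDEG        # Delta Omega / 4 pi
        return N_ALERTS * N_ROTSE * sky_fraction * dt_d_days / 365.25

    print(n_background(1.0))         # ~0.056, i.e. roughly a 2 sigma bare coincidence

    def refined_chance_probability(dt_nu_s, p_d, dt_d_days=1.0):
        """Chance probability after thinning by p_t = dt_nu/100 s and p_d = N_ROTSE(d)/N_ROTSE.
        p_d needs the real ROTSE-III SN distance distribution and is left as an input here."""
        p_t = dt_nu_s / 100.0
        return 1.0 - math.exp(-n_background(dt_d_days) * p_t * p_d)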
V. COINCIDENCES WITH GCN-GRBS
According to current models, about every 15-20th
GRB that can be detected by IceCube will produce
a neutrino doublet. Hence there is a small possibility
that we will find a doublet in coincidence with a GCN
alert, a case that we consider separately here. The
significance of such a coincidence can be estimated with
a calculation analogous to Eq. 3. The number of accidental coincidences with a time difference less than Δt is given by:

N_bg = N_alerts N_GCN (ΔΩ/4π) × (Δt/yr) = 3.2 · 10^-8 (Δt/1 s),   (4)
where we have assumed 200 GCN notices and 30
multiplets a year. A coincidence occurs whenever the
neutrinos and the GRB overlap within predefined win-
dows in direction and time. For illustrative purposes, if
we choose a 1.5-degree directional window and a 4-
hour time window (corresponding roughly to IceCube’s
point spread function and to GRB observations and
modeling), Eq. 4 yields an expected background count
of
N_BG = 4.7 · 10^-4. This corresponds to a 3.5σ
effect, or equivalently the expectation of a false positive
from background once every 2100 years. We can further
reduce the expected background by assuming that the
neutrino signal is most likely to be emitted at the same
time as the gamma rays. Since the background multiplets
will be distributed uniformly across the 4-hour window,
we can multiply the chance probability above by the
factor
p_t = | t_GRB − t_ν | / 4 hours,   (5)
where the absolute value is taken since we assume
the neutrinos are equally likely to be emitted before
the gamma-rays as they are after. Note that our flat
probability assumption for the relative emission times of
gamma rays and neutrinos from GRBs can, of course, be
modified to follow any particular theoretical model. With
all these assumptions, if we observe a coincidence that
is 300 seconds from the GRB onset time, the chance
probability is then given by N_BG · p_t = 4.7 · 10^-4 · 300/14400 = 9.8 · 10^-6, which corresponds to a 4.4σ result.
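The numbers quoted in this section follow directly from Eqs. 4 and 5; the short sketch below (Python) reproduces the arithmetic, assuming that the 1.5-degree directional window quoted in the text corresponds to a circular solid angle.

    import math

    N_ALERTS = 30       # neutrino multiplets per year (assumed in the text)
    N_GCN = 200         # GCN notices per year (assumed in the text)
    WINDOW_DEG = 1.5    # radius of the directional coincidence window
    SKY_SQDEG = 4.0 * math.pi * (180.0 / math.pi) ** 2

    def n_background(dt_s):
        """Eq. 4: accidental GCN/multiplet coincidences within a time window of dt seconds."""
        domega_over_4pi = math.pi * WINDOW_DEG ** 2 / SKY_SQDEG
        return N_ALERTS * N_GCN * domega_over_4pi * dt_s / (365.25 * 86400.0)

    n_bg = n_background(4.0 * 3600.0)     # ~4.7e-4 for the 4-hour window (about 3.5 sigma)
    p_t = 300.0 / (4.0 * 3600.0)          # Eq. 5 for a 300 s offset from the GRB onset
    print(n_bg, n_bg * p_t)               # ~4.7e-4 and ~9.8e-6 (about 4.4 sigma)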
VI. CONCLUSION
We have presented the setup and performance of
IceCube’s optical follow-up program, which was started
in October 2008. The program increases IceCube’s sen-
sitivity to transient sources such as SNe and GRBs and
furthermore allows the immediate identification of the
source. Non-detection of an optical counterpart allows
the calculation of a limit on model parameters such as
jet energy and density of SN accompanied by jets.
In addition multiplets of neutrinos are tested for coinci-
dences with GCN messages. Even a single coincidence
detection would be significant.
VII. ACKNOWLEDGMENTS
A. Franckowiak and M. Kowalski acknowledge the
support of the DFG. D. F. Cowen thanks the Deutscher
Akademischer Austausch Dienst (DAAD) Visiting Re-
searcher Program and the Fulbright Scholar Program.
REFERENCES
[1] A. Achterberg et al. [IceCube Collaboration], Astropart.Phys.
26:155-173, 2006
[2] D. F. Cowen, A. Franckowiak and M. Kowalski, arXiv:0901.4877
[3] C. W. Akerlof et al., PASP 115:132-140, 2003
[4] M. Kowalski, A. Mohr, Astropart.Phys. 27:533-538,2007
[5] S. Ando, J. Beacom, Phys.Rev.Lett. 95:061103,2005
[6] S. Razzaque, P. Meszaros, E. Waxman, Phys.Rev.Lett.
93:181101, 2004
[7] S. Horiuchi, S. Ando, Phys.Rev. D77:063007,2008.
[8] F. Yuan, C. W. Akerlof, Astropart.Phys. 677:808-812,2008
[9] J. Becker, Phys.Rept. 458:173-246,2008
[10] F. Halzen, D. Hooper, Rept.Prog.Phys. 65:1025-1078, 2002
[11] G. J. Feldman, R. D. Cousins, Phys.Rev. D57:3873-3889,1998
[12] S. Ando, F. Beacom, H. Yuksel, Phys.Rev.Lett. 95:171101, 2005
[13] A. Achterberg et al. [IceCube Collaboration], APJ 674:357-370,
2007
Results and Prospects of Indirect Searches for Dark Matter with
IceCube
Carsten Rott∗ and Gustav Wikström† for the IceCube Collaboration‡
∗ Dept. of Physics and Center for Cosmology and Astro-Particle Physics, Ohio State University, Columbus, OH 43210, USA
† Oskar Klein Centre and Dept. of Physics, Stockholm University, SE-10691 Stockholm, Sweden
‡ See special section of these proceedings
Abstract. Dark matter could be indirectly detected
through the observation of neutrinos produced as
part of its self-annihilation process. Possible signa-
tures are an excess neutrino flux from the Sun, the
center of the Earth or from the galactic halo, where
dark matter could be gravitationally trapped. We
present a search for muon neutrinos from neutralino
annihilations in the Sun performed on IceCube data
collected with the 22-string configuration. No excess
over the expected atmospheric background has been
observed and upper limits at 90% confidence level
have been obtained on the annihilation rate and
converted to limits on WIMP-proton cross-sections,
for neutralino masses in the range of 250 GeV to
5 TeV. Further prospects for the detection of dark
matter from the Sun, the Earth, and the galactic halo
will be discussed.
Keywords: Dark Matter, Neutrinos
I. INTRODUCTION
The existence of dark matter can be inferred from
a number of observations, among them rotational pro-
files of galaxies, large scale structure, and WMAP's anisotropy measurement of the cosmic microwave background. Weakly Interacting Massive Particles (WIMPs), 'cold' thermal relics of the Big Bang, are leading dark matter candidates. Despite the overwhelming observational evidence for its existence, the properties of dark matter can only be understood through the detection of direct or indirect signals from its interactions or through production at collider experiments.
persymmetric Standard Model (MSSM) the neutralino
is a promising WIMP particle. It is stable and can
annihilate pair-wise into Standard Model particles [1].
Galactic WIMPs could be gravitationally captured in
the Sun or the Earth and accumulated in their cores
[2]. Among the secondary products from the WIMP
annihilations we expect neutrinos, which could escape
from the center of the Sun or Earth and be detected in
neutrino telescopes. Neutrinos are also expected from
annihilations in the galactic halo. In IceCube [3] we
observe Cherenkov light from relativistic muons in ice.
The data analysis is focused on selecting upward-going
events in order to separate muons from neutrino in-
teractions from background muons created in cosmic-
ray air showers. In this paper we present a search
for a neutralino annihilation signal from the Sun with
the IceCube 22-string detector. Future sensitivities of
IceCube to this signal are discussed, as well as the
prospects of observing annihilation signals from the
Earth or the galactic halo.
II. THE ICECUBE NEUTRINO TELESCOPE
The IceCube Neutrino Telescope is a multipurpose
detector under construction at the South Pole, which
is currently about three-quarters completed [3]. Upon
completion in 2011, IceCube will instrument a volume
of approximately one cubic kilometer of ice utilizing
86 strings, each instrumented with 60 Digital Opti-
cal Modules (DOMs). Eighty of these strings will be
arranged in a hexagonal pattern with an inter-string
spacing of about 125 m and with 17 m vertical separation
between DOMs, at a depth between 1450 m and 2450 m.
Complementing this 80 string baseline design will be
a deep and dense sub-array named DeepCore [4] that
will be formed out of seven regular IceCube strings
in the center of the array together with six additional
strings deployed in between them. In this way, the sub-
array will achieve an interstring-spacing of 72 m. The
six additional DeepCore strings will have a different
distribution of their 60 DOMs, optimizing their design
towards a lower energy threshold. The optical sensors
will have a vertical spacing of 7 m, will be deployed in
deep transparent ice¹ and will consist of high quantum
efficiency photomultiplier tubes (HQE PMTs). This will
enable us to study neutrinos at energies down to a few tens of GeV. DeepCore will be an extremely interesting detector for the study of WIMPs.

¹ The deep ice is clearer, with a scattering length roughly twice that of the upper part of the IceCube detector. In addition, the deeper location (below 2000 m) provides improved shielding from cosmic ray backgrounds.
III. ANALYSIS OF 22-STRING DATA
The 2007 dataset, consisting of 104.3 days of livetime with the Sun below the horizon recorded with the IceCube 22-string detector, was searched for a neutrino signal from the Sun [5]. The event sample was reduced in steps from 4.8 · 10^9 to 6946 events at final level, which constitutes the expected sample of atmospheric neutrinos with a contamination of atmospheric muons.

Fig. 1. Cosine of the angle to the Sun, Ψ, for data (squares) with one standard deviation error bars, and the atmospheric background expectation (dashed line). Also shown is a simulated signal (m_χ̃₁⁰ = 1000 GeV, hard spectrum) scaled to the found upper limit of µ_s = 6.8 events.
Since the analysis is based on comparing the shape of
the angular distribution of signal and background (see
section III-B), there is no need to achieve a high-purity
atmospheric neutrino sample at final cut level. Filtering
was based on log-likelihood muon track reconstructions,
geometry, and time evolution of the hit pattern. Events
were required to have a good quality track reconstruction
with a zenith angle in the interval 90° to 120°. Multi-
variate training and selection was done with the help of
Support Vector Machines [6]. At the final stages in the
analysis, randomized real data were used to model the
atmospheric background.
A. Simulations
Five WIMP masses: 250, 500, 1000, 3000, and 5000
GeV were simulated using WimpSim [7] in two an-
nihilation channels, bb (soft channel) and W⁺W⁻ (hard channel), representing the extremes of the neu-
trino energy distributions. Single and coincident shower
atmospheric muon backgrounds were simulated using
CORSIKA [8]. The atmospheric neutrino background
was simulated [9] following the Bartol flux [10].
Charged particle propagation [11] and photon propa-
gation [12], using ice measurements [13], were also
simulated.
B. Results
The final data sample was used to test the hypothesis
that it contains a certain signal level, against the null
hypothesis of no signal. The shape of the angular distri-
bution of events with respect to the Sun was used as a
test statistic (see Figure 1). The background-only p.d.f.
was constructed from data with randomized azimuth
angles, while the p.d.f.s for the different signal models
tested were obtained from Monte Carlo. A limit was
set on the relative strength of the signal p.d.f. using a Feldman-Cousins [14] confidence interval construction.

Fig. 2. Upper limits at the 90% confidence level on the muon flux from neutralino annihilations in the Sun for the soft (bb) and hard (W⁺W⁻) annihilation channels, adjusted for systematic effects, as a function of neutralino mass [5]. For neutralino masses below m_W, τ⁺τ⁻ is used as the hard annihilation channel. The lighter [green] and darker [blue] shaded areas represent MSSM models not disfavored by direct searches [20], [21] based on σ_SI and 100 · σ_SI, respectively. A muon energy threshold of 1 GeV was used when calculating the flux. Also shown are the limits from BAKSAN [15], MACRO [16], Super-K [17], and AMANDA [18], and the expected sensitivity of IceCube with DeepCore.

These limits were transformed to a limit on the
muon flux above 1 GeV, which is shown in Figure 2
together with previous limits [15], [16], [17], [18],
MSSM models [19], and a conservative estimate of
the full IceCube sensitivity including DeepCore. The
models shown are those not excluded by CDMS [20] and
XENON10 [21] based on the spin-independent WIMP-
proton cross-section. Models in the darker region require
a factor of 100 increase in sensitivity of direct detection
experiments in order to be probed by them. Assuming
that WIMPs are in equilibrium in the Sun, the limit
on the muon flux can be converted to a limit on the
spin-dependent WIMP-proton cross-section [22]. These
limits are shown in Figure 3 together with previous
limits [17], [20], [23], [24], MSSM models, and the
IceCube sensitivity.
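As an illustration of such a shape analysis, the sketch below (Python with NumPy) maximizes a simple unbinned likelihood for the signal strength µ_s given signal and background p.d.f.s in cos Ψ; the toy p.d.f.s and event sample are invented for illustration, and the actual limit additionally uses the Feldman-Cousins construction described above rather than this bare likelihood scan.

    import numpy as np

    def shape_log_likelihood(mu_s, cos_psi, signal_pdf, background_pdf):
        """Unbinned shape likelihood for mu_s signal events among the observed events;
        signal_pdf and background_pdf are normalized p.d.f.s in cos(Psi)."""
        f_sig = mu_s / len(cos_psi)
        mix = f_sig * signal_pdf(cos_psi) + (1.0 - f_sig) * background_pdf(cos_psi)
        return np.sum(np.log(mix))

    def best_fit_mu(cos_psi, signal_pdf, background_pdf, mu_grid):
        """Scan the signal strength over a grid and return the maximum-likelihood value."""
        ll = [shape_log_likelihood(m, cos_psi, signal_pdf, background_pdf) for m in mu_grid]
        return mu_grid[int(np.argmax(ll))]

    # Toy example: flat background in cos(Psi) on [0.99, 1], signal peaked towards the Sun
    background = lambda x: np.full_like(x, 1.0 / 0.01)
    signal = lambda x: 2.0 * (x - 0.99) / 0.01 ** 2
    rng = np.random.default_rng(1)
    events = np.concatenate([rng.uniform(0.99, 1.0, 100),              # background-like events
                             0.99 + 0.01 * np.sqrt(rng.random(5))])    # a few signal-like events
    print(best_fit_mu(events, signal, background, np.linspace(0.0, 30.0, 301)))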
IV. EARTH WIMPS
Dark matter could also be gravitationally trapped at
the center of the Earth. Such scenarios are generally not favored due to the Earth's less efficient capture of dark matter. However, from an experimental point of view, searches for dark matter from the center of the Earth are still of interest due to the many unknowns that
plague the relic, capture and annihilation processes that
enter into the calculation of the expected fluxes. In order
to search for an indirect signal from dark matter anni-
hilation from the Earth, IceCube uses muon neutrinos
ν
µ
that interact in or below the IceCube detector. They
produce vertically up-going track-like events that point back to the center of the Earth. We have designed a string
trigger [25] for IceCube that is specifically optimized for this class of events. A similar trigger has also been active in AMANDA. The IceCube string trigger, which selects events with a cluster of hits on a single string, has been active since spring 2008. It requires 5 DOMs to be above threshold in a series of 7 consecutive DOMs, within a time window of 1.5 µs. Due to the low noise environment and this special trigger, IceCube has an energy threshold for these vertical events that can reach below 100 GeV. The increase in efficiency for these events, over the default DOM multiplicity trigger condition, is shown in Figure 4. Based on selection criteria optimized for these vertically up-going events [30], we will derive a sensitivity for the detection of a possible additional muon flux. These results will be shown at the time of the conference.

Fig. 3. Upper limits at the 90% confidence level on the spin-dependent neutralino-proton cross-section σ_SD for the soft (bb) and hard (W⁺W⁻) annihilation channels, adjusted for systematic effects, as a function of neutralino mass [5]. The lighter [green] and darker [blue] shaded areas represent MSSM models not disfavored by direct searches [20], [21] based on σ_SI and 100 · σ_SI, respectively. Also shown are the limits from CDMS [20], COUPP [24], KIMS [23] and Super-K [17], and the expected sensitivity of IceCube with DeepCore.

Fig. 4. Impact of the string trigger for the detection of vertically up-going muon neutrinos as a function of their energy. The efficiency increase is shown compared to IceCube's multiplicity-eight DOM trigger.
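The string trigger condition described above (at least 5 DOMs firing among any 7 consecutive DOMs on one string within 1.5 µs) can be expressed compactly; the sketch below (Python) uses an invented hit format for illustration and is not the IceCube DAQ implementation.

    def string_trigger(hits, n_required=5, span=7, time_window=1.5e-6):
        """Vertical-event string trigger sketch: at least n_required distinct DOMs fire
        among any `span` consecutive DOMs on one string within `time_window` seconds.
        `hits` is a list of (dom_index, time_in_seconds) tuples for a single string."""
        for dom0, _ in hits:
            block = [(d, t) for d, t in hits if dom0 <= d < dom0 + span]
            for t0 in sorted(t for _, t in block):
                doms_in_window = {d for d, t in block if 0.0 <= t - t0 <= time_window}
                if len(doms_in_window) >= n_required:
                    return True
        return False

    # Six consecutive DOMs firing within half a microsecond satisfy the condition; four do not.
    vertical_like = [(20, 0.0), (21, 1e-7), (22, 2e-7), (23, 3e-7), (24, 4e-7), (25, 5e-7)]
    print(string_trigger(vertical_like))       # True
    print(string_trigger(vertical_like[:4]))   # False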
Interpreting a possible muon flux (induced by muon neutrino interactions in or below the IceCube detector) from WIMP annihilation in the Earth is somewhat more complicated than for the solar WIMP searches. The escape velocity of the Earth is relatively small (v ≈ 15 km/s at the center) and capture is only possible for low-speed WIMPs, unless the WIMP mass is nearly identical to that of one of the nuclear species in the Earth. WIMPs are typically
only expected to be captured after they are bound to
the solar system due to previous scattering in the Sun;
such capture mechanisms are described in [26], [27],
[28]. Contrary to the Sun, capture and annihilation of
WIMPs are generally not in equilibrium in the Earth.
Hence, the expected flux of neutrinos from dark matter
annihilations strongly depends on how much dark matter
was previously accumulated. Models that enhance the
collection of dark matter by the Earth therefore also
significantly boost expected signals. One such example
is an expected boost due to lower velocity WIMPs in
the galactic halo from previous dwarf mergers. Such
scenarios could boost fluxes at neutrino telescopes by
a few orders of magnitude [29].
Such examples show that big uncertainties remain
in the overall flux predictions for neutrinos from the
center of the Earth. IceCube, with the combination of
DeepCore, is an ideal instrument to look for such signals.
V. HALO WIMPS
Besides searches for indirect signals from dark matter
annihilation in the center of the Sun and Earth, another
promising way is to look directly at the galactic halo.
Such a signal could be seen in neutrinos as a large scale
flux anisotropy that peaks towards the Galactic Center.
IceCube has in the past not performed a dedicated search
for such signals. However, theoretical predictions indi-
cate that such a search can provide stringent limits [31],
[32] on the dark matter self-annihilation cross-section.
They are complementary to Solar WIMP searches, as
they probe the dark matter self-annihilation cross-section
directly.
The analysis for a neutrino flux anisotropy is still on-
going on the IceCube dataset. We perform this analysis
on data collected with the IceCube 40-string configura-
tion. Neutrino-induced muon events are being used to
search for a neutrino flux anisotropy towards the direc-
tion of the Galactic Center. In its current configuration,
IceCube can only access up-going muon neutrinos for
the energy range of interest (around and below a TeV)
with sufficient background rejection. The region closest
towards the Galactic Center, accessible in IceCube with
up-going events, is therefore near the horizon. It covers,
in part, a distance of about 30° towards the Galactic
Center. Using a declination band simplifies the back-
ground estimation, as an on and off-source comparison
can be performed. Second order effects need to be taken
into account; these include uneven detector exposure-
times, as the reconstruction efficiency is a function of
the azimuth angle for the tracks in the same declination
band. We plan to present the sensitivity using this
method at the time of the conference for different signal
distributions within the declination bands [33].
For the future, DeepCore is especially promising for
the halo WIMP searches, as it lowers the neutrino energy
threshold, holds promise for cascade reconstruction and
will allow observation of the entire sky. The lower
energy threshold will increase expected signal rates, especially for WIMPs with masses of a few hundred GeV, and scale with the increase in neutrino effective area.
Leading dark matter candidates have masses in the sub-
TeV range, so the expected neutrino energy spectrum
is at the low energy end of IceCube’s sensitivity. The
detection of low energy cascades caused by ν_τ and ν_e charged-current interactions or neutral-current interactions of all neutrino flavors is especially interesting, as the atmo-
spheric neutrino background to this signal is much lower
than the muon neutrino background. Even a very limited
angular resolution for these cascades, which IceCube
might be able to achieve, would benefit the analysis, as
it is looking for a large scale anisotropy. The usage of
surrounding IceCube strings as veto against down-going
muons in DeepCore is expected to give large reductions
in this background and enable us to study the entire
sky. Simple veto methods have achieved background
reductions of four orders of magnitude with excellent
signal retention and have potential for greater than 6
orders of magnitude rejection utilizing reconstruction
veto methods [34]. Since the Galactic Center, for which
the largest flux from dark matter annihilation is expected,
is located in the southern hemisphere, this will benefit
the analysis in particular.
Expected neutrino fluxes from dark matter self-
annihilations in the galactic halo are generally small.
Results from PAMELA [35] and Fermi [36] might indi-
cate larger than usual self-annihilation cross-sections of
the halo dark matter, this could either be due to unusually
large boost factors (clumpiness) well above expectations
from dark matter halo simulations, or due to an enhance-
ment in the self-annihilation cross-section (for example
Sommerfeld enhancement). Lepton results could also
be entirely explainable by astronomical sources (for
example pulsars [37]). Regardless of what the source
of the recent excess is, it only shows there remains a
large uncertainty in any flux predictions for neutrinos
from dark matter annihilations or decays in the galactic
halo. This, it will be important to check for any such
signals with neutrinos.
VI. SUMMARY
IceCube has set the best limits to date on WIMP
annihilation in the Sun using 22-string data from 2007.
Using data from the completed 86-string detector, which
will include the DeepCore low-energy extension, im-
provements of an order of magnitude are expected.
Searches for signals from the Earth and the galactic halo
are also expected to give interesting results.
REFERENCES
[1] G. Jungman, M. Kamionkowski and K. Griest, Phys. Rep. 267,
195 (1996).
[2] W. H. Press and D. N. Spergel, Astrophys. J 296, 679 (1985).
[3] A. Achterberg et al., Astropart. Phys. 26, 155 (2006).
[4] D. Cowen for the IceCube coll., Proc. of NEUTEL09, (2009).
[5] R. Abbasi et al., Phys. Rev. Lett. 102, 201302 (2009).
[6] S. S. Kerthi et al., Neural Comp. 13, 637 (2001).
[7] M. Blennow, J. Edsjö, T. Ohlsson, JCAP 01, 021 (2008).
[8] D. Heck et al., FZKA Report 6019 (1998).
[9] A. Gazizov and M. Kowalski, Comput. Phys. Commun. 172, 203
(2005).
[10] G. D. Barr et al., Phys. Rev. D 70, 023006 (2004).
[11] D. Chirkin and W. Rhode, hep-ph/0407075v2.
[12] J. Lundberg et al., Nucl. Instr. Meth. A 581, 619 (2007).
[13] M. Ackermann et al., J. Geophys. Res. 111, 02201 (2006).
[14] G. J. Feldman and R. D. Cousins, Phys. Rev. D 57, 3873 (1998).
[15] M. M. Boliev et al., Nucl. Phys. Proc. Suppl. 48, 83 (1996).
[16] M. Ambrosio et al., Phys. Rev. D 60, 082002 (1999).
[17] S. Desai et al., Phys. Rev. D 70, 083523 (2004).
[18] M. Ackermann et al., Astropart. Phys. 24, 459 (2006).
[19] P. Gondolo et al., JCAP 0407, 008 (2004).
[20] Z. Ahmed et al., astro-ph/0802.3530.
[21] J. Angle et al., Phys. Rev. Lett. 100, 021303 (2008).
[22] G. Wikström and J. Edsjö, JCAP 04, 009 (2009).
[23] H. S. Lee et al., Phys. Rev. Lett. 99, 091301 (2007).
[24] E. Behnke et al., Science 319, 933 (2008).
[25] A. Gross et al. for the IceCube coll., astro-ph/0711.0353.
[26] J. Lundberg and J. Edsjö, Phys. Rev. D 69, 123505 (2004).
[27] A. Gould, Astrophys. J. 328, 919 (1988).
[28] A. H. G. Peter, astro-ph/0902.1348.
[29] T. Bruch, A. H. G. Peter, J. Read, L. Baudis and G. Lake, astro-
ph/0902.4001.
[30] C. Rott, for the IceCube coll., these proceedings.
[31] H. Yuksel, S. Horiuchi, J. F. Beacom and S. Ando, astro-
ph/0707.0196.
[32] J. F. Beacom, N. F. Bell and G. D. Mack, Phys. Rev. Lett. 99,
231301 (2007).
[33] G. Mack, and C. Rott, forthcoming.
[34] D. Grant et al., for the IceCube coll., these proceedings.
[35] O. Adriani et al., Nature 458, 607 (2009).
[36] Fermi/LAT Collaboration, arXiv:0905.0025.
[37] H. Yuksel, M. D. Kistler and T. Stanev, astro-ph/0810.2784.
Search for the Kaluza-Klein Dark Matter with the
AMANDA/IceCube Detectors
Matthias Danninger∗ and Kahae Han† for the IceCube Collaboration‡
∗ Department of Physics, Stockholm University, AlbaNova, S-10691 Stockholm, Sweden
† Department of Physics and Astronomy, University of Canterbury, Pr. Bag 4800 Christchurch, New Zealand
‡ See special section of these proceedings
Abstract. A viable WIMP candidate, the lightest
Kaluza-Klein particle (LKP), is motivated by theories
of universal extra dimensions. LKPs can scatter off
nuclei in large celestial bodies, like the Sun, and
become trapped within their deep gravitational wells,
leading to high WIMP densities in the object’s core.
Pair-wise LKP annihilation could lead to a detectable
high energy neutrino flux from the center of the Sun
in the IceCube neutrino telescope.
We describe an ongoing search for Kaluza-Klein
solar WIMPs with the AMANDA-II data for the
years 2001-2003, and also present a UED dark matter
sensitivity projected to 180 days from a study of
data taken with the combined AMANDA II and
IceCube detector in the year 2007. A competitive
sensitivity, compared to existing direct and indirect
search experiments, on the spin-dependent cross
section of the LKP on protons is also presented.
Keywords: Kaluza-Klein, Dark Matter, IceCube
I. INTRODUCTION
Kaluza-Klein weakly interacting massive particles
(WIMP) arising from theories with extra dimensions
have come under increased scrutiny [1] alongside WIMP
candidates from supersymmetric particle theories, e.g.
the neutralino.
Several analyses [2], [3] performed on the data from the
AMANDA-II and the IceCube detectors have already put
limits on the neutralino induced muon flux from the Sun
comparable to that of direct detection experiments. The
first excitation of the Kaluza-Klein (KK) photon, B⁽¹⁾, in the case of Universal Extra Dimensions (UED) with
one extra dimension, annihilates to all standard model
particles. This results in the production of a detectable
flux of muon neutrinos in the IceCube detector. B⁽¹⁾ is often referred to as the LKP, the lightest Kaluza-Klein particle. KK-momentum conservation leads to the
stability of the LKP, which makes it a viable dark matter
candidate. Compared to neutralino WIMPs, LKPs come
from a relatively simple extension of the Standard Model
and, consequently, branching ratios (see Table I) and
cross sections are calculated with fewer assumptions and
parameter-dependences. This feature allows us to per-
form a combined channel analysis for an LKP particle.
Another consequence of the simple UED model is that
with the assumption of a compactified extra dimension
scale of around 1 TeV, the particle takes a much narrower range of masses [1] from the relic density calculation, ranging from 600 GeV to 800 GeV, and 500 GeV to 1500 GeV if coannihilations are accounted for [4]. Moreover, collider search limits rule out LKP masses below 300 GeV [5], [6].

TABLE I
Possible channels for the pair annihilation of B⁽¹⁾B⁽¹⁾ and branching ratios of the final states. Figures taken from [20].

  Annihilation process                              Branching ratio
  B⁽¹⁾B⁽¹⁾ → ν_e ν̄_e, ν_µ ν̄_µ, ν_τ ν̄_τ            0.012
           → e⁺e⁻, µ⁺µ⁻, τ⁺τ⁻                       0.20
           → uū, cc̄, tt̄                            0.11
           → dd̄, ss̄, bb̄                           0.07
In this paper we describe an ongoing solar WIMP analysis with the 2001 AMANDA data. Furthermore, we derive for the combined geometry of 22 IceCube strings (IC22) and AMANDA (referred to as the combined analysis in the rest of the paper) the projected sensitivity on the muon flux and spin-dependent (SD) cross section for LKP WIMPs with data from the year 2007.
The AMANDA-II detector, a smaller predecessor of IceCube with 677 OMs on 19 strings, ordered in a 500 m by 200 m diameter cylindrical lattice, has been fully operational since 2001 [7]. The IceCube detector, with its 59th string deployed this season, is much larger, with increased spacing between the strings, and will have a total instrumented volume of 1 km³ [8]. The set-up in 2007 for the combined analysis consisted of 22 IceCube strings and the 19 AMANDA strings, with a separate trigger and data acquisition system. The detector geometry for both AMANDA-II and IC22 is shown in Fig. 1.
II. SIMULATIONS
A solar WIMP analysis can be thought of as using the Earth as its primary physical filter for data, as one only looks at data collected when the Sun is below the horizon at the South Pole, Θ_⊙ ∈ [90°, 113°]. Single¹, µ_single, and coincident², µ_coin, atmospheric muons that come from cosmic ray showers in a zenith angle range Θ_µ of [0°, 90°] constitute the majority of the background, whereas the near-isotropic distribution of atmospheric neutrinos, ν_atm, forms an irreducible background. The atmospheric muon backgrounds are generated using CORSIKA [9] with the Hörandel CR composition model [10]. For the atmospheric neutrino background, produced according to the Bartol model [11], ANIS [12] is used. For the combined analysis, the simulated µ_single background has a detector livetime of 1.2 days, µ_coin of 7.1 days and ν_atm of 9.8 years.

¹ Atmospheric muons from single CR showers.
² Atmospheric muons from coincident CR showers.

Fig. 1. Top view of the 2007 IceCube+AMANDA detector configuration. The IceCube-22 strings (squares) enclose the AMANDA strings (circles).
WIMPSIM [13], [14] was used to generate the signal
samples for LKP WIMPs, consisting of 2 million events per channel for WIMP masses varying from 250 GeV to 3000 GeV. Individual annihilation channels (three ν's, τ, and t, b, c quarks), contributing to ν_µ's at the detector, were generated for the combined analysis, as well as for the AMANDA-only analysis (in the latter case for the energy range from 500 GeV to 1000 GeV). Muon and Čerenkov light propagation in Antarctic ice were
simulated using IceCube/AMANDA software such as
MMC [15], PTD and photonics [16]. Finally, AMASIM
for AMANDA and ICESIM for the combined detector
were used to simulate the detector response. The signal
detection efficiency of the two detector configurations is given by the effective volume, V_eff, which is defined for a constant generation volume, V_gen, by

V_eff = V_gen · N_obs / N_gen,   (1)

where N_obs is the number of observed LKP events and N_gen the number of generated LKP events undergoing charged-current interactions within V_gen. V_eff is a good quantity to compare LKP detectability at trigger level for the two analyses, shown in Fig. 2a.
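A minimal illustration of Eq. 1 follows; the event counts used here are hypothetical and not taken from this analysis.

    def effective_volume(v_gen_km3, n_obs, n_gen):
        """Eq. 1: effective volume from the fraction of generated LKP events
        (with a charged-current interaction inside V_gen) that are observed."""
        return v_gen_km3 * n_obs / n_gen

    # Hypothetical counts: 2e6 generated events in a 1 km^3 generation volume, 4e4 observed
    print(effective_volume(1.0, 4.0e4, 2.0e6))   # 0.02 km^3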
After deadtime correction, 142.5 days of data when the
Sun was below the horizon were available in 2001 with
a total number of 1.46 · 10^9 recorded events for the AMANDA analysis. The combined analysis uses a projected total livetime of 180 days of data for the sensitivities calculated in this paper.
The main purpose of the Monte Carlo (MC) simulations
of the various background sources is to show that a good
agreement with experiment is achieved, demonstrating
a sufficient understanding of the detector. Thus, it
is viable to assume that the LKP signal samples are
simulated correctly within the AMANDA/IceCube
simulation-chain and can be used to select the different
cut parameters for the higher cut levels L2, L3 and L4,
because their difference from background in different
parameter distributions can be clearly identified. The
actual cut value of each cut level is obtained by
maximizing the efficiency function, or a figure-of-
merit, for simulated LKP signals and the experimental
background sample, which consists of data taken when
the Sun was above the horizon and therefore contains
no solar WIMP signal. Setting cut values based on
experimental background datasets has the advantage
that possible simulation flaws are minimized.
III. FILTERING
LKP signals are point sources with very distinct directional limitations (zenith angle Θ_zen = 90° ± 23°). Hence, the general strategy of filtering for both analyses is to apply strict directional cuts in early filter levels. L0 and L1 consist of calibration, reconstruction and a simple angular cut of Θ_zen > 70° on the first-guess reconstructed track. This leads to a passing efficiency of around 0.7 for all LKP signal samples, and a reduction of around 0.002 for both data and muon background. All events passing the L0 + L1 level are reconstructed using log-likelihood methods (llh). L2 is a two-dimensional cut on the reconstructed llh-fit zenith angle (Θ_zen,llh) within Θ_⊙ and the estimated angular uncertainty of the llh track. L3 picks reconstructed tracks which are nearly horizontal and pass through the detector, to further minimize vertical tracks associated with background events. The multivariate filter level, L4, consists of two different multivariate analysis routines from the TMVA [17] toolkit, namely a support vector machine (SVM) together with a Gaussian fit function and a neural network (NN). The input variables for the two algorithms are obtained by choosing parameters with low correlation but high discrimination power between background and signal. The individual output parameters are combined in one multivariate cut parameter Q_NN · Q_SVM.
IV. SENSITIVITY
After the L4 cut³, the muon background reduction is better than a factor of 1.16 · 10^7, which implies that the final sample is dominated by the ν_atm background.

³ Starting with L4, only the combined analysis is discussed.

Fig. 2. Fig. 2a shows the effective volume as a function of LKP mass at trigger level and final cut level for the IceCube-22+AMANDA analysis, and at trigger level only for the AMANDA analysis. Fig. 2b demonstrates the projected sensitivity, for 180 days of livetime, on the muon flux from LKP annihilations in the Sun as a function of LKP mass for the IceCube-22+AMANDA detector configuration. (a) Effective volume. (b) Muon flux sensitivity.

The solar
search looks for an excess in neutrino events over the
expected background in a specifically determined search
cone towards the direction of the Sun with an opening
angle Ψ. Events with a reconstructed track direction pointing back towards the Sun within an angle Ψ are kept, where Ψ is optimized to discriminate between the ν_atm background and a sum of all seven LKP channels,
weighted with the expected branching ratios as listed in
Table I.
The expected upper limit, or sensitivity, for an expected number of background events n_Bg is

μ_s^90%(n_Bg) = Σ_{n_obs=0}^{∞} μ_s^90%(n_obs) · (n_Bg)^{n_obs} / (n_obs)! · e^{−n_Bg} ,    (2)
where μ_s^90%(n_obs) is the Feldman-Cousins upper limit for the number of observed events n_obs [18]. The model rejection factor [19],

MRF = μ_s^90% / n_s ,    (3)

is used to determine the optimum opening angle Ψ of the solar search cone. Here, n_s is the number of surviving LKP signal events within Ψ.
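The Poisson-weighted average of Eq. (2) and the model rejection factor of Eq. (3) can be sketched numerically as follows. For brevity, the Feldman-Cousins limit is replaced here by a classical Poisson upper limit (an approximation; the analysis uses the unified Feldman-Cousins construction), and the numbers in the usage line are placeholders.

```python
import numpy as np
from scipy.stats import poisson
from scipy.optimize import brentq

def classical_upper_limit(n_obs, n_bg, cl=0.9):
    """Stand-in for mu_s^90%(n_obs): classical Poisson upper limit solving
    P(N <= n_obs | mu + n_bg) = 1 - cl."""
    if poisson.cdf(n_obs, n_bg) <= 1.0 - cl:
        return 0.0   # strong downward fluctuation (the case FC handles more gracefully)
    f = lambda mu: poisson.cdf(n_obs, mu + n_bg) - (1.0 - cl)
    hi = n_obs - n_bg + 20.0 * np.sqrt(n_obs + 1.0) + 25.0
    return brentq(f, 0.0, hi)

def average_upper_limit(n_bg, cl=0.9):
    """Eq. (2): Poisson-weighted average of the per-observation upper limit."""
    ns = np.arange(int(n_bg + 10.0 * np.sqrt(n_bg) + 20))
    weights = poisson.pmf(ns, n_bg)
    limits = np.array([classical_upper_limit(n, n_bg, cl) for n in ns])
    return float(np.sum(weights * limits))

def model_rejection_factor(n_bg, n_signal_in_cone, cl=0.9):
    """Eq. (3): MRF = average upper limit / n_s, scanned over the cone angle."""
    return average_upper_limit(n_bg, cl) / n_signal_in_cone

# toy usage with placeholder background and signal expectations
print(model_rejection_factor(n_bg=10.0, n_signal_in_cone=50.0))
```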
Under the assumption of no signal detection, the Feldman-Cousins sensitivity discussed above can be derived for the combined detector with a total projected livetime of T_live = 180 days. The expected number of events after cut level L4 is estimated from a processed subset of observational data with a detector livetime of 5.61 days. The result is then extrapolated to the total livetime T_live, yielding an expectation of 7140 events. The corresponding expectation from the simulated background samples, n_Bg,MC, normalized to the data at filter level L1 and extrapolated to T_live, is 633 (μ_coin) + 1038 (μ_single) + 5340 (ν_atm) = 7011 (n_Bg,MC).
The expected sensitivity to the neutrino-to-muon conversion rate Γ_{ν→μ}^90% is given by

Γ_{ν→μ}^90% = μ_s^90% / (V_eff · T_live) ,    (4)
where the effective volume V_eff is given by Eq. 1. For each annihilation channel, one can separately calculate V_eff within the solar search cone, determined by the combined signal p.d.f. f_S^all(x|Ψ), and thereby determine a Γ_{ν→μ}^90% for each channel. Additionally, the combined effective volume, V_eff,LKP, for the expected ν_LKP spectrum is given by the sum of the individual V_eff per channel, weighted with the respective branching ratio of each channel. For the neutrino-to-muon conversion rate per single channel, the expected limit on the annihilation rate in the core of the Sun per second is given by

Γ_A^90% = (c_1(ch, m_B^(1)))^{−1} · Γ_{ν→μ}^90% ,    (5)
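The per-channel bookkeeping of Eqs. (4)-(5) reduces to a few lines; the sketch below assumes per-channel effective volumes, branching ratios and conversion constants c_1 as inputs, and every number shown is a placeholder rather than a value from the analysis. The symbol Γ follows the reconstruction used above.

```python
# Minimal sketch of Eqs. (4)-(5); all numerical inputs below are placeholders.
mu_s_90 = 25.0                   # averaged event upper limit from Eq. (2)
t_live = 180 * 86400.0           # 180 days of livetime in seconds

# per channel: (effective volume within the search cone, branching ratio, constant c_1)
channels = {
    "nu nu":     (0.050, 0.012, 1.0e22),
    "tau+ tau-": (0.030, 0.007, 2.0e22),
    "q qbar":    (0.010, 0.035, 5.0e22),
}

gamma_nu_mu = {}                 # Eq. (4): Gamma^90%_{nu->mu} per channel
gamma_a = {}                     # Eq. (5): Gamma^90%_A = c_1^-1 * Gamma^90%_{nu->mu}
for name, (v_eff, br, c1) in channels.items():
    gamma_nu_mu[name] = mu_s_90 / (v_eff * t_live)
    gamma_a[name] = gamma_nu_mu[name] / c1

# combined effective volume for the full LKP neutrino spectrum:
# branching-ratio-weighted sum of the per-channel effective volumes
v_eff_lkp = sum(v * br for v, br, _ in channels.values())
```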
where c_1(ch, m_B^(1)) is a constant that depends on the LKP annihilation channel (ch) and energy. The sensitivity to the muon flux at a plane at the combined detector is derived via the calculation chain Γ_{ν→μ}^90% → Γ_A^90% → Φ_μ^90% and is computed using the code described in [13], [14]. The results for the final V_eff and the predicted sensitivity to a muon flux from LKP-induced annihilations in the Sun, for the combined IceCube-22 and AMANDA detector in 2007 with a total livetime of 180 days, are presented in figures 2a and 2b. From the derived ν-to-μ conversion rate, Γ_{ν→μ,LKP}^90%, we can calculate the sensitivity to the annihilation rate in the Sun per second, Γ_{A,LKP}^90%.
[Fig. 3 legend: precision data + WMAP (0.1037 < Ωh^2 < 0.1161), σ_SD < σ_SD^lim; IceCube-22+AMANDA (2007) 180 d sensitivity; CDMS (2008); COUPP (2008); KIMS (2007).]
Fig. 3. Theoretically predicted spin-dependent B^(1)-on-proton elastic scattering cross sections are indicated by the shaded area [22]. The cross-section predictions vary with the assumed mass of the first KK excitation of the quark, constrained by 0.01 ≤ r = (m_q^(1) − m_B^(1))/m_B^(1) ≤ 0.5. The current best limits, set by direct search experiments, are plotted together with the sensitivity of the combined detector IceCube-22+AMANDA. The region below m_B^(1) = 300 GeV is excluded by collider experiments [5], [6], and m_B^(1) > 1500 GeV is strongly disfavored by WMAP observations [23].
In [20], it is shown that the equilibrium condition between Γ_A,LKP and the capture rate C_⊙ is met by LKPs within the probed mass range. Furthermore, the capture rate of LKPs in the Sun is entirely dominated by the spin-dependent component of the B^(1)-on-proton elastic scattering [21]. Consequently, presuming equilibrium, Γ_A,LKP = C_⊙, the sensitivity to the spin-dependent elastic scattering cross section of B^(1) (taking the local dark matter density in our galaxy to match the mean density ρ_DM = 0.3 GeV c^−2 cm^−3 and the rms velocity v = 270 km/s) can be calculated as

σ_H,SD ≃ 0.597 · 10^−24 pb · (m_B^(1) / 1 TeV)^2 · (Γ_A,LKP^90% / s^−1) .    (6)
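For orientation, Eq. (6) can be evaluated directly; the annihilation-rate value used below is a placeholder, not a result of the analysis.

```python
PB_TO_CM2 = 1e-36   # 1 pb = 1e-36 cm^2

def sigma_sd_cm2(m_b1_gev, gamma_a_per_s):
    """Eq. (6): spin-dependent B(1)-proton cross section from the
    annihilation-rate sensitivity, assuming capture/annihilation equilibrium."""
    sigma_pb = 0.597e-24 * (m_b1_gev / 1000.0) ** 2 * gamma_a_per_s
    return sigma_pb * PB_TO_CM2

# placeholder example: a 1 TeV LKP with Gamma_A^90% = 1e21 s^-1 gives ~6e-40 cm^2
print(sigma_sd_cm2(1000.0, 1e21))
```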
The estimated sensitivity to the spin-dependent cross section for LKPs is displayed in figure 3, along with the most recently published limits from direct search experiments. The theoretical spin-dependent cross-section predictions (shaded area) for LKPs are taken from [22] and are plotted for different assumptions on the mass of the first KK excitation of the quark.
V. CONCLUSION AND OUTLOOK
We showed that a competitive result on the spin-dependent cross section for LKP-on-proton scattering can be obtained with the combined geometry of AMANDA-II and IceCube-22, exploring part of the theoretically predicted LKP region that is not yet excluded.
We also described the ongoing solar WIMP analysis of the AMANDA-II data taken during 2001; this will be extended to include the 2002 and 2003 data.
Furthermore, as the energy spectrum of ν_μ's induced by LKP annihilations in the Sun is very hard, the full-sized IceCube-80 detector will markedly improve the sensitivity and set strong limits on LKP WIMP theories.
REFERENCES
[1] D. Hooper et al., The PAMELA and ATIC Signals from Kaluza-Klein Dark Matter, http://arxiv.org/abs/0902.0593.
[2] J. Braun et al., Searches for WIMP Dark Matter from the Sun with AMANDA, ICRC (2009).
[3] R. Abbasi et al. (the IceCube collaboration), arXiv:0902.2460 (14 Feb 2009).
[4] T. Appelquist et al., Physical Review D 64 (2001) 035002.
[5] I. Gogoladze et al., Physical Review D 74 (2006) 093012.
[6] T. Appelquist et al., Physical Review D 67 (2003) 055002.
[7] E. Andres et al. (the AMANDA collaboration), Astrop. Phys. 13 (2000) 1.
[8] A. Achterberg et al. (the IceCube collaboration), Astrop. Phys. 26 (2006) 155.
[9] D. Heck et al., FZKA Report 6019, Forschungszentrum Karlsruhe (1998).
[10] J. Hörandel, Astrop. Phys. 19 (2003) 193.
[11] G. D. Barr et al., Phys. Rev. D 70 (2004) 023006.
[12] A. Gazizov et al., Comput. Phys. Commun. 172 (2005) 203.
[13] J. Edsjö, WimpSim Neutrino Monte Carlo, http://www.physto.se/~edsjo/wimpsim/.
[14] M. Blennow et al., JCAP 21 (2008).
[15] D. Chirkin, Cosmic ray energy spectrum measurement with AMANDA, PhD Thesis, UC Berkeley (2003).
[16] J. Lundberg et al., Nucl. Instr. Meth. A 581 (2007).
[17] A. Höcker et al., http://tmva.sourceforge.net.
[18] G. J. Feldman et al., Phys. Rev. D 57 (1998) 3873.
[19] G. C. Hill et al., Astrop. Phys. 19 (2003) 393.
[20] D. Hooper et al., arXiv:hep-ph/0208261v3 (16 Feb 2003).
[21] D. Hooper et al., arXiv:hep-ph/0409272v1 (23 Sep 2004).
[22] S. Arrenberg et al., arXiv:0805.4210v1 (28 May 2008).
[23] M. Tegmark et al., Physical Review D 74 (2006) 123507.
Searches for WIMP Dark Matter from the Sun with AMANDA
James Braun* and Daan Hubert† for the IceCube Collaboration‡
*Dept. of Physics, University of Wisconsin, Madison, WI 53706, USA
†Vrije Universiteit Brussel, Dienst ELEM, B-1050 Brussels, Belgium
‡See the special section of these proceedings
Abstract. A well-known potential dark matter signature is emission of GeV–TeV neutrinos from annihilation of neutralinos gravitationally bound to massive objects. We present results from recent searches for high energy neutrino emission from the Sun with AMANDA, in all cases revealing no significant excess. We show limits on both the neutralino-induced muon flux from the Sun and the neutralino-nucleon cross section, comparing them with recent IceCube results. In particular, our limits on the spin-dependent cross section are much better than those obtained in direct detection experiments, allowing AMANDA and other neutrino telescopes to search a complementary portion of MSSM parameter space.
Keywords: AMANDA, WIMP, Neutralino
I. INTRODUCTION
Weakly interacting massive particles (WIMPs) with
electroweak scale masses are currently a favored expla-
nation of the missing mass in the universe. Such particles
must either be stable or have a lifetime comparable to
the age of the universe, and they would interact with
baryonic matter gravitationally and through weak inter-
actions. The minimal supersymmetric standard model
(MSSM) provides a natural candidate, the lightest neu-
tralino [1]. A large range of potential neutralino masses
exists, with a lower bound on the mass of the lightest
neutralino of 47 GeV imposed by accelerator-based
analyses [2], while predictions based on the inferred dark
matter density suggest masses up to several TeV [3].
Searches for neutralino dark matter include direct searches for nuclear recoils from weak interactions of neutralinos with matter [4], [5] and indirect searches for standard model particles produced by neutralino annihilation. In particular, a fraction of neutralinos interacting with massive objects would become gravitationally bound and accumulate in their centers. If neutralinos comprise dark matter, enough should accumulate and annihilate to produce an observable neutrino flux. Searches
for a high energy neutrino beam from the center of
the Earth [6] and the Sun [7], [8], [9], [10], [11] have
yielded negative results. Observations of a cosmic ray
electron-positron excess by ATIC [12], PPB-BETS [13],
Fermi [14], and HESS [15], along with the anomalous
cosmic ray positron fraction reported by PAMELA [16],
could be interpreted as an indirect signal of dark matter
annihilation in our galaxy [17].
Here we present searches for a flux of GeV–TeV
neutrinos from the Sun using AMANDA. We improve on
the sensitivity of the previous AMANDA analysis [11]
significantly and extend the latest results from IceCube
[7] to lower neutralino masses. We observe no neutralino
annihilation signal and report limits on the neutrino-
induced muon flux from the Sun and the resulting limits
on neutralino-proton spin-dependent cross section.
II. NEUTRINO DETECTION WITH AMANDA
The detection of neutrino fluxes above ∼50 GeV is a major goal of the Antarctic Muon And Neutrino Detector Array (AMANDA). AMANDA consists of 677 optical modules embedded 1500 m to 2000 m deep in the ice sheet at the South Pole, arranged in 19 vertical strings and occupying a volume of ∼0.02 km^3. Each module contains a 20 cm diameter photomultiplier tube (PMT) optically coupled to an outer glass pressure sphere. PMT pulses ("hits") from incident Cherenkov light are propagated to surface electronics and are recorded as an event when 6–7 hits on any one string or 24 total hits occur within 2.5 μs. The vast majority of the O(10^9) events recorded each year are downgoing muons
produced by cosmic ray air showers in the atmosphere above the South Pole. Relativistic charged leptons produced near the detector via charged-current neutrino interactions similarly trigger the detector, with several thousand atmospheric-neutrino-induced muon events recorded per year. The hit leading-edge times, along with the known AMANDA geometry and ice properties [18], allow reconstruction of muon tracks with a median accuracy of 1.5°–2.5°, dependent on zenith angle.
AMANDA operated standalone from 2000–2006 and is currently a subdetector of the much larger (∼km^3) IceCube Neutrino Observatory [19], scheduled for completion in 2011. The optical module density of AMANDA is much higher than that of IceCube, making AMANDA more efficient for low-energy muons (≲300 GeV), which emit less Cherenkov light.
III. DATA SELECTION AND METHODS
We describe two separate searches for Solar neu-
tralinos in this proceeding. First, we present a search
using a large data sample from 2000–2006 prepared
for a high energy extraterrestrial point source search
[20], [21]. We also present a search using data from
2001–2003, optimized to retain low energy events [22].
Both analyses are done in two stages; first, neutrino
induced muon events are isolated from the much larger
background of downgoing muons, then a search method
is used to test for an excess at the location of the Sun.
A. Data Selection
While the Sun is above the horizon, neutrino-induced
muons from the Sun are masked by the much larger
background of downgoing cosmic ray muons; thus, we
select data during the period when the Sun is below the
horizon (Mar. 21 – Sept. 21), resulting in 953 days live-
time from 2000–2006 and 384 days from 2001–2003. In
both analyses, neutrino events are isolated by selecting
well reconstructed upgoing muon tracks. Events are first
reconstructed with fast pattern matching algorithms, and
events with zenith angles θ < 80° (θ < 70° for the 2001–2003 analysis) are discarded, eliminating the vast majority of downgoing muons. The remaining events are reconstructed with a more computationally intensive maximum-likelihood reconstruction [23], accurate to 1.5°–2.5°, and again events with θ < 80° are discarded.
O(10^6) misreconstructed downgoing muon events remain per year, and these are reduced by cuts on track quality parameters such as the track angular uncertainty [24], the smoothness (evenness) of hits along the track [23], and the likelihood difference between the maximum-likelihood track and a forced downgoing likelihood fit using the zenith distribution of downgoing muons as a prior [23]. For the 2000–2006 analysis, 6595 events remain after quality cuts, dominantly atmospheric neutrinos [20], reduced to 4665 events by requiring dates when the Sun is below the horizon. Zenith distributions from 2000–2006 are shown in figure 1.
We consider neutralino masses from 100 GeV to 5 TeV and two extreme annihilation channels: W^+W^− (τ^+τ^− for 50 GeV) and bb̄, which produce high and low energy neutrino spectra, respectively, relative to the neutralino mass. The fraction of signal events retained depends on the neutrino energy spectrum and varies from 17% for a 5 TeV neutralino mass and the W^+W^− channel to 1% for 100 GeV and the bb̄ channel, relative to trigger level, in the 2000–2006 analysis.
The 2001–2003 analysis is a dedicated neutralino
search, unlike the 2000-2006 analysis, and more con-
sideration is given to low energy events. Twelve event
observables are considered, and selection criteria based
on these observables are optimized separately for three
signal classes, dependent on neutralino mass and anni-
hilation channel, to maximize retention of signal events.
The signal classes are shown below along with the
number of events passing selection criteria.
Class   Channel    m_χ                Final Events
A       W^+W^−     250 GeV – 5 TeV    670
        bb̄         500 GeV – 5 TeV
B       W^+W^−     100 GeV            504
        bb̄         250 GeV
C       τ^+τ^−     50 GeV             398
        bb̄         50, 100 GeV
Fig. 1. Reconstructed zenith angles of data (circles) at trigger level, filter level (θ < 80°), and final cut level; true (fine dotted) and reconstructed (solid) zenith angles of CORSIKA [31] downgoing muon simulation at trigger level; reconstructed zenith angles of ANIS [28] atmospheric neutrino simulation at trigger level, filter level, and final cut level (dashed); and reconstructed zenith angles of a neutrino signal from the Sun at final cut level (dash-dotted).
The selection is more efficient than in the 2000–2006 analysis, with 21% of signal retained for 5 TeV and the W^+W^− channel, down to 4% for 100 GeV and the bb̄ channel. The 2001–2003 analysis additionally considers 50 GeV neutralino masses, with a signal efficiency of 1%–3%.
B. Search Method
Both analyses use maximum-likelihood methods [25] to search for an excess of events near the location of the Sun. The data are modeled as a mixture of n_s signal events from the Sun and background events from both atmospheric neutrinos and misreconstructed downgoing muons. The signal likelihood for the i-th event is

S_i = 1/(2πσ_i^2) · e^{−ψ_i^2 / (2σ_i^2)} ,    (1)
where ψ_i is the space angle difference between the event and the Sun, and σ_i is the event angular uncertainty [24]. The background likelihood B_i is obtained from the zenith distribution of off-source data. The full-data likelihood over all N data events is

L = P(Data|n_s) = Π_{i=1}^{N} [ (n_s/N) S_i + (1 − n_s/N) B_i ]    (2)
and is numerically maximized to find the best-fit event excess n̂_s. The likelihood ratio −2 log[L(0)/L(n̂_s)] is approximately χ^2 distributed and provides a measure of significance. Event upper limits are set from this likelihood using the Feldman-Cousins unified construction [26].
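A minimal numerical sketch of Eqs. (1)-(2) and of the likelihood-ratio test is given below; the event sample, angular uncertainties and background p.d.f. are toy placeholders, not the AMANDA data or analysis code.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(2)
N = 500
psi = rng.uniform(0.0, np.radians(20.0), N)      # space angle to the Sun (toy events)
sigma = np.full(N, np.radians(2.0))              # per-event angular uncertainty

# Eq. (1): signal p.d.f.; the background p.d.f. is taken flat in solid angle here,
# as a placeholder for the off-source zenith distribution used in the analysis
S = np.exp(-0.5 * (psi / sigma) ** 2) / (2.0 * np.pi * sigma ** 2)
B = np.full(N, 1.0 / (2.0 * np.pi * (1.0 - np.cos(np.radians(20.0)))))

def neg_log_likelihood(ns):
    # Eq. (2): mixture of ns signal-like and N - ns background-like events
    return -np.sum(np.log(ns / N * S + (1.0 - ns / N) * B))

fit = minimize_scalar(neg_log_likelihood, bounds=(0.0, N), method="bounded")
ns_hat = fit.x
test_statistic = 2.0 * (neg_log_likelihood(0.0) - neg_log_likelihood(ns_hat))
```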
C. Signal Simulation and Systematic Uncertainties
Neutrino energy distributions at Earth from neutralino
annihilation in the Sun are generated by DarkSUSY
[27]. For the 2000–2006 analysis, neutrino events are
generated with ANIS [28], with muons propagated using
MMC [29], then reweighted to the energy distributions
described above. For 2001–2003, the DarkSUSY energy
distributions are sampled by WimpSimp [30], and muons
are propagated with MMC.
Uncertainties in our signal simulation are dominated by uncertainties in optical module sensitivity and photon propagation in ice. These uncertainties are constrained by comparing the trigger rate of CORSIKA [31] downgoing muon simulation using various hadronic models with the observed AMANDA trigger rate. The effect on the signal prediction is measured by shifting the simulated optical module efficiency within these constraints and ranges from 10% for m_χ = 5 TeV, W^+W^− channel, to 21% for m_χ = 100 GeV, bb̄ channel. Other sources of uncertainty
include event selection (4%–8%) and uncertainty in neu-
trino mixing angles (5%). For the 2000–2006 analysis,
uncertainties total 13%–24% and are included in the
limit calculation using the method of Conrad
et al.
[32]
as modified by Hill [33]. Uncertainties for 2001–2003
total 23%–38% and are included in the limits assuming
the worst case.
IV. RESULTS
The search methods are applied to the final data, and both analyses reveal no significant excess of neutrino-induced muons from the direction of the Sun. A Sun-centered significance skymap from the 2000–2006 analysis (figure 2) shows a 0.8σ deficit from the direction of the Sun. For the 2001–2003 analysis, a deficit of events is observed in classes A and C, and a small excess is seen in class B. Each excess or deficit is within the 1σ range of background fluctuations.
Upper limits on the neutralino annihilation rate in the Sun are calculated from the event upper limit μ_90 by

Γ_A = (4πR^2 μ_90) / (N_A ρ T_L V_eff) · [ ∫_0^{m_χ} σ_νN (dN_ν/dE) dE ]^{−1} ,    (3)
where R is the Earth-Sun distance, N_A is the Avogadro constant, ρ is the density of the detector medium, T_L is the livetime, and σ_νN is the neutrino-nucleon cross section. The muon neutrino energy spectrum dN_ν/dE for a given annihilation channel is obtained from DarkSUSY and includes absorption and oscillation effects from transit through the Sun and to Earth. The energy-averaged effective volume V_eff is obtained from simulation. Limits on the muon flux are given by
its on muon flux are given by
Φ
µ
=
A
4πR
2
?
m
χ
1 GeV
dN
µ
dE
dE,
(4)
and limits on neutralino-proton cross section are calcu-
lated according to [34]. These quantities are tabulated in
table I for the more restrictive of the two analyses. Muon
flux limits, assuming a 1 GeV threshold on muon energy,
and spin-dependent cross section limits are shown in
figure 3 for both analyses.
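The conversion from an event upper limit to an annihilation-rate and muon-flux limit (Eqs. (3)-(4)) can be sketched as below. The spectra, cross section and effective volume used here are toy placeholders (not the DarkSUSY spectra used in the analysis), and the integrals are simple trapezoidal sums.

```python
import numpy as np

def trapz(y, x):
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

# Placeholder inputs for one neutralino mass / channel
m_chi = 500.0                    # GeV
mu_90 = 3.7                      # event upper limit (cf. Table I)
R = 1.496e13                     # Earth-Sun distance [cm]
N_A = 6.022e23                   # Avogadro constant [1/g]
rho = 0.92                       # density of the detector medium (ice) [g/cm^3]
T_L = 384 * 86400.0              # livetime [s]
V_eff = 1.31e6 * 1e6             # energy-averaged effective volume [cm^3]

E = np.linspace(1.0, m_chi, 2000)              # GeV
dN_nu_dE = np.exp(-E / (0.3 * m_chi))          # toy muon-neutrino spectrum per annihilation
dN_mu_dE = 0.3 * np.exp(-E / (0.2 * m_chi))    # toy muon spectrum at the detector
sigma_nuN = 0.7e-38 * E                        # toy nu-N cross section [cm^2]

# Eq. (3): upper limit on the annihilation rate in the Sun [1/s]
gamma_A = (4.0 * np.pi * R**2 * mu_90
           / (N_A * rho * T_L * V_eff * trapz(sigma_nuN * dN_nu_dE, E)))

# Eq. (4): corresponding muon flux limit at the detector [1/(cm^2 s)]
phi_mu = gamma_A / (4.0 * np.pi * R**2) * trapz(dN_mu_dE, E)
```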
V. DISCUSSION
These limits extend the latest IceCube limits to lower
neutralino masses and are now beginning to exclude
neutralino spin-dependent cross sections allowed by
direct detection experiments (figure 3). Even a 1000-fold improvement over the current spin-independent direct-detection limits [4], [5] would not significantly constrain the allowed spin-dependent cross sections; thus, neutrino telescopes will continue to
observe a complementary portion of MSSM parameter
space over the next several years. IceCube is currently
operating with 59 strings and will contain 86 strings
when complete in 2011. The DeepCore extension to
IceCube [35], six strings with tighter string spacing (72
m), tighter optical module spacing (7 m), and higher
PMT quantum efficiency, will be complete in 2010.
DeepCore will significantly enhance the sensitivity of
IceCube to low energy muons, extending the reach of
IceCube to lower neutralino masses.
REFERENCES
[1] M. Drees and M. M. Nojiri, Phys. Rev. D 47, 376 (1993).
[2] C. Amsler et al., Phys. Lett. B 667, 1 (2008).
[3] R. C. Gilmore, Phys. Rev. D 76, 043520 (2007).
[4] Z. Ahmed et al., Phys. Rev. Lett. 102, 011301 (2009).
[5] J. Angle et al., Phys. Rev. Lett. 100, 021303 (2008).
[6] A. Achterberg et al., Astropart. Phys. 26, 129 (2006).
[7] R. Abbasi et al., arXiv:0902.2460.
[8] M. M. Boliev et al., Nucl. Phys. Proc. Suppl. 48, 83 (1996).
[9] M. Ambrosio et al., Phys. Rev. D 60, 082002 (1999).
[10] S. Desai et al., Phys. Rev. D 70, 083523 (2004).
[11] M. Ackermann et al., Astropart. Phys. 26, 155 (2006).
[12] J. Chang et al., Nature 456, 362 (2008).
[13] S. Torii et al., arXiv:0809.0760 (2008).
[14] A. A. Abdo et al., Phys. Rev. Lett. 102, 181101 (2009).
[15] F. Aharonian et al., arXiv:0905.0105.
[16] O. Adriani et al., Nature 458, 607 (2009).
[17] L. Bergström, J. Edsjö, and G. Zaharijas, arXiv:0905.0333 (2009).
[18] M. Ackermann et al., J. Geophys. Res. 111, D13203 (2006).
[19] J. Ahrens et al., Astropart. Phys. 20, 507 (2004).
[20] R. Abbasi et al., Phys. Rev. D 79, 062001 (2009).
[21] J. Braun, PhD Thesis, University of Wisconsin (2009).
[22] D. Hubert, PhD Thesis, Vrije Universiteit Brussel (2009).
[23] J. Ahrens et al., Nucl. Inst. Meth. A 524, 169 (2004).
[24] T. Neunhöffer, Astropart. Phys. 25, 220 (2006).
[25] J. Braun et al., Astropart. Phys. 29, 299 (2008).
[26] G. J. Feldman and R. D. Cousins, Phys. Rev. D 57, 3873 (1998).
[27] P. Gondolo et al., JCAP 0407, 008 (2004).
[28] A. Gazizov and M. Kowalski, Comput. Phys. Commun. 172, 203 (2005).
[29] D. Chirkin and W. Rhode, arXiv:hep-ph/0407075 (2004).
[30] L. Bergström, J. Edsjö, and P. Gondolo, Phys. Rev. D 58, 103519 (1998).
[31] D. Heck et al., FZK, Tech. Rep. FZKA 6019 (1998).
[32] J. Conrad et al., Phys. Rev. D 67, 012002 (2003).
[33] G. C. Hill, Phys. Rev. D 67, 118101 (2003).
[34] G. Wikström and J. Edsjö, JCAP 04, 009 (2009).
[35] D. Cowen et al., in Proc. of NEUTEL09, Venice (2009).
[36] H. S. Lee et al., Phys. Rev. Lett. 99, 091301 (2007).
[37] E. Behnke et al., Science 319, 933 (2008).
Fig. 2. Sun-centered skymap of event excesses from the 2000–2006 analysis.
m_χ (GeV)  Channel   V_eff (m^3)    μ_90   Γ_A (s^-1)      Φ_μ (km^-2 y^-1)   σ_SI (cm^2)      σ_SD (cm^2)
50         τ^+τ^−    4.31 × 10^3    6.2    2.11 × 10^25    1.21 × 10^5        1.84 × 10^-40    4.80 × 10^-38
           bb̄        8.62 × 10^2    8.4    1.32 × 10^27    1.32 × 10^6        1.15 × 10^-38    3.01 × 10^-36
100        W^+W^−    2.87 × 10^4    4.5    1.88 × 10^23    6.75 × 10^3        3.40 × 10^-42    1.52 × 10^-39
           bb̄        8.65 × 10^3    4.5    1.42 × 10^25    4.94 × 10^4        2.56 × 10^-40    1.14 × 10^-37
200        W^+W^−    3.42 × 10^5    4.0    9.81 × 10^21    1.09 × 10^3        4.23 × 10^-43    2.98 × 10^-40
           bb̄        9.80 × 10^3    4.5    1.29 × 10^24    1.13 × 10^4        5.56 × 10^-41    3.92 × 10^-38
500        W^+W^−    1.31 × 10^6    3.7    2.07 × 10^21    5.39 × 10^2        3.51 × 10^-43    3.81 × 10^-40
           bb̄        8.87 × 10^4    4.0    8.52 × 10^22    2.12 × 10^3        1.45 × 10^-41    1.57 × 10^-38
1000       W^+W^−    2.18 × 10^6    3.6    1.39 × 10^21    4.18 × 10^2        7.82 × 10^-43    1.01 × 10^-39
           bb̄        2.14 × 10^5    4.0    2.89 × 10^22    1.26 × 10^3        1.63 × 10^-41    2.10 × 10^-38
2000       W^+W^−    2.38 × 10^6    3.6    1.56 × 10^21    3.90 × 10^2        3.19 × 10^-42    4.52 × 10^-39
           bb̄        3.53 × 10^5    3.9    1.46 × 10^22    9.10 × 10^2        2.98 × 10^-41    4.23 × 10^-38
5000       W^+W^−    2.07 × 10^6    3.6    2.20 × 10^21    3.94 × 10^2        2.66 × 10^-41    3.97 × 10^-38
           bb̄        4.59 × 10^5    3.7    8.91 × 10^21    7.17 × 10^2        1.08 × 10^-40    1.61 × 10^-37
TABLE I
Effective volume, event upper limit, and preliminary limits on the neutralino annihilation rate in the Sun, the neutrino-induced muon flux from the Sun, and the spin-independent and spin-dependent neutralino-proton cross sections for a range of neutralino masses, including systematics.
Fig. 3. Preliminary limits on the neutrino-induced muon flux from the Sun (left), along with limits from IceCube [7], BAKSAN [8], MACRO [9], and Super-K [10], and limits on the spin-dependent neutralino-proton cross section (right), along with limits from CDMS [4], IceCube [7], Super-K [10], KIMS [36], and COUPP [37]. The green shaded area represents models from a scan of MSSM parameter space not excluded by the spin-independent cross section limits of CDMS [4] and XENON [5], and the blue shaded area represents the models still allowed if the spin-independent limits are tightened by a factor of 1000. The projected sensitivity of 10 years of operation of IceCube with DeepCore is shown in both figures. Muon flux limits assume a muon energy threshold of E_thr = 1 GeV.
The extremely high energy neutrino search with IceCube
Keiichi Mase*, Aya Ishihara* and Shigeru Yoshida* for the IceCube Collaboration†
*Department of Physics, Chiba University, Yayoi-tyo 1-33, Inage-ku, Chiba city, Chiba 263-8522, Japan
†See the special section of these proceedings
Abstract. A search for extremely high energy (EHE) cosmogenic neutrinos has been performed with IceCube. An understanding of the high-energy atmospheric muon backgrounds, which have a large uncertainty, is the key for this search. We constructed an empirical high-energy background model. Extensive comparisons of the empirical model with the observational data in the background dominated region were performed, and the empirical model describes the observed atmospheric muon backgrounds properly. We report the results based on the data collected in 2007 with the 22 string configuration of IceCube. Since no event was found in the search for EHE neutrinos, a preliminary upper limit on an E^-2 flux of E^2 φ_{ν_e+ν_μ+ν_τ} ≤ 5.6 × 10^-7 GeV cm^-2 s^-1 sr^-1 (90% C.L.) is placed in the energy range 10^7.5 < E_ν < 10^10.6 GeV.
Keywords: neutrinos, IceCube, extremely high energy
I. INTRODUCTION
Extremely high energy cosmic rays (EHECRs) with energies above 10^11 GeV are observed by several experiments. Although there is an indication that EHECRs are associated with the matter profile of the universe [1], their origin is still unknown. The detection of cosmogenic EHE neutrino signals with energies greater than 10^7 GeV can shed light on their origin. The cosmogenic neutrinos [2] produced by the GZK mechanism [3] carry information on the EHECR source evolution and the maximum energy of EHECRs at their production site [4]. Thus, EHE neutrinos can provide fundamental information about how and where the EHECRs are produced.
The detection of EHE neutrinos has been an experimental challenge because the very small intensities of expected EHE neutrino fluxes require a huge effective detection volume. The IceCube neutrino observatory, currently under construction at the geographic South Pole, provides a rare opportunity to overcome this difficulty with a large instrumented volume of 1 km^3.
The backgrounds for the EHE neutrino signals are atmospheric muons. The large flux of atmospheric muons arrives vertically, while the signal comes primarily from zenith angles close to the horizon, reflecting the competing processes of generation of energetic secondary leptons able to reach the detector and absorption of neutrinos due to the increase of the cross sections with energy. The atmospheric muon backgrounds drop off rapidly with increasing energy. Therefore, a possible EHE neutrino flux will exceed the background in the EHE region (≳ 10^8 GeV). The signal is separated from the backgrounds by using angle and energy information.
II. THE EHE EVENTS AND THE ICECUBE DETECTOR
At extremely high energies, neutrinos are mainly detected via the secondary muons and taus induced during the propagation of EHE neutrinos in the Earth [5]. These particles are seen in the detector as a series of energetic cascades from radiative energy loss processes such as pair creation, bremsstrahlung and photonuclear interactions, rather than as minimum ionizing particles. These radiative energy losses are approximately proportional to the energies of the muon and tau, making it possible to estimate their energy by observing the energy deposit in the detector.
The Cherenkov light from the particles generated through the radiative processes is observed by an array of Digital Optical Modules (DOMs), which digitize the charges amplified by the enclosed 10-inch Hamamatsu photomultiplier tubes (PMTs) with a gain of ∼10^7. The total number of photo-electrons (NPE) detected by all DOMs is used to estimate the energy of particles in this analysis. NPE is found to be a robust parameter for estimating the particle energy.
The data used in this analysis were taken with the 22 string configuration of IceCube (IC22). Each string consists of 60 DOMs, giving 1320 DOMs in total for the 22 strings. The data taking began in May 2007 and continued to April 2008. This analysis used a specific filtered data stream to select high energy events, which requires a minimum of 80 triggered DOMs. The total livetime is 242.1 days after removing data taken during unstable operation. The event rate at this stage is ∼1.5 Hz with a 16% yearly variation. Then, 6516 events with NPE greater than 10^4 (corresponding to a CR primary energy of about 10^7 GeV and a neutrino energy of about 10^6 GeV for an E^-2 flux) are selected and used for the further analysis.
III. BACKGROUND MODELING
A. Construction of the empirical model
Bundles of muons produced in CR air showers are the
major background for the EHE signal search. Multiple
muon tracks with a small geometrical separation resem-
ble a single high energy muon for the IceCube detector.
An understanding of the high energy atmospheric muon
backgrounds is essential for the EHE signal search.
However, the backgrounds in the relevant energy range (> 10^7 GeV) are highly uncertain because of the poorly characterized hadronic interactions and the composition of the primary CRs, for which no direct measurement is available.
Therefore, we constructed an empirical model based on the Elbert model [6], optimizing the model to match the observational data reasonably well in the background dominated energy region (10^4 < NPE < 10^5). The model is then extrapolated to higher energies to estimate the background in the EHE signal region (see Fig. 1).
The original Elbert model gives the number of muons for a CR primary energy E_0 as

N_μ = (E_T A^2) / (E_0 cos θ′) · (A E_μ / E_0)^{−α} · (1 − A E_μ / E_0)^{β} ,    (1)
where A is the mass number of the primary CR with energy E_0, and θ′ is the zenith angle of the muon bundle. α, β and E_T are empirical parameters. The energy-weighted integration of the formula relates the total energy carried by a muon bundle at the surface, E_μ^B,surf, to the primary CR energy E_0 as

E_μ^B,surf ≡ ∫_{E_th^surf}^{E_0/A} (dN_μ/dE_μ) E_μ dE_μ ≃ E_T · (A / cos θ′) · (α / (α−1)) · (A E_th^surf / E_0)^{−α+1} ,    (2)
where E_th^surf is the threshold energy of muons contributing to a bundle at the surface; it depends on the zenith angle. The surface threshold is related to a threshold energy at the IceCube depth, E_th^in-ice, by assuming an energy loss proportional to the bundle energy during propagation. This threshold at the IceCube depth is independent of zenith angle.
With the help of a Monte Carlo (MC) simulation of the detector response, together with the measured CR flux, it is possible to predict the NPE distribution for given α and E_th^in-ice parameters. The CR flux used in this analysis is taken from the compilation of several experimental observations in Ref. [7]. The detector response, including the Cherenkov photon emission, the propagation in the detector volume and the PMT/DOM response, is simulated with the IceCube simulation program. The α and E_th^in-ice parameters are then optimized to reproduce the observed NPE distributions. The best-fit parameters are α = 1.97 and E_th^in-ice = 1500 GeV.
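For illustration, the Elbert-type parameterization of Eqs. (1)-(2) is easy to evaluate once α and the threshold are fixed. The sketch below uses the best-fit α = 1.97, while E_T and β are placeholders since their values are not quoted in the text; the example numbers are likewise arbitrary.

```python
import math

ALPHA = 1.97     # best-fit spectral parameter from this analysis
BETA = 5.25      # placeholder value, not quoted in the text
E_T = 14.5       # GeV; placeholder normalization, not quoted in the text

def n_muons(e_mu, e0, a, cos_theta):
    """Eq. (1): number of muons above energy e_mu [GeV] in a shower of
    primary energy e0 [GeV] and mass number a at zenith angle theta'."""
    x = a * e_mu / e0
    if x >= 1.0:
        return 0.0
    return E_T * a**2 / (e0 * cos_theta) * x**(-ALPHA) * (1.0 - x)**BETA

def bundle_energy_surface(e0, a, cos_theta, e_th_surf):
    """Eq. (2): total energy carried by the muon bundle at the surface."""
    x_th = a * e_th_surf / e0
    return (E_T * a / cos_theta) * (ALPHA / (ALPHA - 1.0)) * x_th**(-ALPHA + 1.0)

# example: a 1e9 GeV proton shower at 45 degrees with a 2 TeV surface threshold
print(n_muons(2.0e3, 1.0e9, 1, math.cos(math.radians(45))),
      bundle_energy_surface(1.0e9, 1, math.cos(math.radians(45)), 2.0e3))
```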
With this empirical model, a simple simulation is feasible, rather than simulating all muon tracks in a bundle, where the multiplicity can reach ten thousand for CR primary energies of 10^11 GeV. Therefore, a bundle is replaced by a single track with the same energy as the entire bundle. It is shown in the next section that this substitution works well to describe the observational data.
Data generated with CORSIKA [8] (with the SIBYLL high energy hadronic interaction model) are also used. However, the extensive resources required for MC generation preclude production of MC data with energies above 10^10 GeV. Therefore, the CORSIKA data are mainly used to confirm the empirical model in the background dominated energy region and provide a redundant tool to study the systematic uncertainty of the background estimation.
The relation between CR primary energy and the NPE
(which is the empirical model itself) is independently
verified by using information from coincident events
with the in-ice and surface detectors. The surface de-
tectors can estimate the CR primary energy and the
in-ice detectors give NPE. The relation is found to be
consistent with the empirical model we derived.
B. Comparison between observational data and MC
An extensive comparison between the empirical model and the observational data was performed. The empirical model is found to describe the observational data reasonably well in most cases. However, a significant difference was found in the distribution of the z position (depth) of the center of gravity of the event (CoGZ). Many events are found in the deep part of the detector for the empirical model, while the events concentrate more at the top for the observational data. The difference is only seen for vertical muons. This is probably due to the simple single-muon substitution for the muon bundles in the empirical model: the more energetic single muons penetrate into the deep part, while the many low energy muons in a bundle lose their energy at the top of the detector in the vertical case. For inclined events, however, the bundles are already attenuated before reaching the detector, giving reasonable agreement between the observational data and the empirical model. Therefore, vertical events whose reconstructed zenith angles are less than 37° are not used in this analysis. A simple algorithm is used for the angle reconstruction, based on the time sequence of the first pulses recorded by the DOMs.
Several distributions for the observational data and MC data after removing the vertical events are shown in Fig. 1, together with the expected GZK cosmogenic neutrino signal [4]. As seen in the figure, the empirical model describes the observational data reasonably well. The observed CoGZ distribution is also well represented by the empirical model after removing vertical events. The observed data are bracketed by the pure CORSIKA (SIBYLL) proton and iron simulations, as expected.
Some up-going events are seen in the observational data, though this is consistent with the empirical background model. It is found that they are mis-reconstructed as horizontal. On the other hand, fewer horizontal events are found for the CORSIKA data sets. This is because the CORSIKA data exhibit a better angular resolution of 1.4° (one sigma), compared to 2.5° for the empirical model. The angular resolution for the observational data is estimated with the help of the IceTop geometrical reconstruction. The estimated resolution is 2.5°, consistent with that of the empirical model. Another difference
Fig. 1. The total NPE, zenith angle and CoGZ distributions for observational and MC data. The black dots represent observational data, green lines the empirical model (the shaded band expresses the uncertainty of the model), red lines proton (CORSIKA, SIBYLL) and magenta lines iron (CORSIKA, SIBYLL) simulations. The expected signal from GZK neutrinos [4] is also plotted with blue lines.
between the observational data and the CORSIKA data is found in the CoGZ distribution. The CORSIKA data concentrate more at the top of the detector, especially for vertical events. The CORSIKA data also show a narrower distribution in the relation between CR primary energy and NPE. All these facts seem to indicate that the bundles in CORSIKA consist of more lower-energy muon tracks compared to the observational data, leading to bundles with smaller stochastic energy losses. More specific investigation is needed to confirm this hypothesis.
The GZK signal events populate the EHE region
and tend to be horizontal, as described in a previous
section. This allows one to discriminate them from the
background. The signal is also concentrated in the deep
part of the detector because of the more transparent ice
there.
IV. SEARCH FOR EHE NEUTRINO SIGNAL
Using the empirical background model, the EHE signal search was performed based on the NPE and zenith angle information. The selection criteria are determined by using only MC data sets that are tuned to the observational data in the background dominated energy region (10^4 ≤ NPE ≤ 10^5), following a blind analysis procedure.
It is found that the large spread of mis-reconstructed events extends into the signal region. We found that the angular resolution is related to the CoGZ position. Events whose CoGZ is at the bottom of the detector (CoGZ < −250 m) and which pass through the edge or outside of the bottom of the detector are significantly mis-reconstructed as horizontal. When an inclined track reaches the edge of the bottom part of the detector, there is no more detector below, so the hit timing pattern resembles that of a horizontal track. The very clean ice at the bottom of the detector and the largest dust layer in the middle enhance this effect. Therefore, the data sample is divided into two parts by the CoGZ position as follows:
region A: −250 m < CoGZ < −50 m, and CoGZ > 50 m;
region B: CoGZ < −250 m, and −50 m < CoGZ < 50 m.
A clear difference between the backgrounds and the
signal is seen in the zenith angle and total NPE relations
as shown in Fig. 2. The atmospheric background muon
distribution shows a steep fall in NPE and peaks in the
vertical direction, while the GZK signal is mainly hor-
izontal and at higher NPE, allowing the discrimination
of the backgrounds by rejecting low NPE events and
vertically reconstructed events. The large spread in the zenith-angle direction for region B is evidently due to mis-reconstructed events.
The selection criteria to separate signal from background are determined for regions A and B separately. The criteria are first determined for each zenith angle bin, requiring the background level to be negligible compared to the signal (10^-4 events per 0.1 cos(zenith angle) bin per 242.1 days). After the optimization for each zenith angle bin, the determined NPE cut-offs are connected with contiguous lines as shown in Fig. 2.
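The per-bin cut determination can be sketched as follows; this is a schematic illustration (the background rate is a toy power law, and only the 10^-4-events-per-bin target is taken from the text), not the actual optimization code.

```python
import numpy as np

BG_TARGET = 1e-4     # allowed background events per 0.1 cos(zenith) bin in 242.1 days

def npe_cutoff(bg_per_bin, log_npe_grid):
    """Smallest log10(NPE) cut such that the background expectation above the
    cut (from the empirical model) falls below BG_TARGET."""
    tail = np.cumsum(bg_per_bin[::-1])[::-1]      # integrate from high NPE downwards
    above = np.where(tail < BG_TARGET)[0]
    return log_npe_grid[above[0]] if above.size else None

# toy, steeply falling background spectrum per zenith bin (placeholder shape)
log_npe = np.arange(4.0, 7.0, 0.05)
cuts = {}
for cos_zen in np.arange(-1.0, 0.0, 0.1):
    rate = 10.0 ** (2.0 - 2.5 * (log_npe - 4.0)) * max(0.1, -cos_zen)
    cuts[round(cos_zen, 1)] = npe_cutoff(rate, log_npe)
```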
The expected numbers of signal and background
events with the selection criteria are summarized in
Table I.
TABLE I
Expected event numbers
Model        Expected events in 242.1 days
GZK1 [4]     0.16 ± 0.00 (stat.) +0.03/−0.05 (sys.)
Atm. muon    (6.3 ± 1.4 (stat.) +6.4/−3.9 (sys.)) × 10^-4
The effective area for each neutrino flavor averaged
over all solid angles with the selection criteria is shown
in Fig. 3.
V. RESULTS
The EHE neutrinos are searched for by applying the selection criteria determined in the previous section to the 242.1 days of observational data taken in 2007.
Since no event is found in the search, a 90% C.L. upper limit for all neutrino flavors (assuming full mixing neutrino oscillations) is placed with the quasi-differential method based on the flux per energy decade (∆log10 E = 1.0) described in Ref. [9]. A 90% C.L. preliminary upper limit for an E^-2 spectrum is also derived as E^2 φ_{ν_e+ν_μ+ν_τ} ≤ 5.6 × 10^-7 GeV cm^-2 s^-1 sr^-1,
Fig. 2. Zenith angle vs. total NPE. The top plots are for region A and the bottom ones for region B. From left to right, the plots show the observational data, the background from the empirical model, and the GZK signal [4].
Fig. 3. The effective area for each neutrino flavor after applying the signal selection criteria, averaged over all solid angles. The blue dotted line represents ν_e, the black solid line ν_μ and the red dashed line ν_τ.
where 90% of the events are in the energy range 10^7.5 < E_ν < 10^10.6 GeV, taking the systematics into account. These preliminary limits, as well as the results of
several model tests, are shown in Fig. 4. The derived limit is comparable to the Auger [13] and HiRes [16] limits. The AMANDA limit [12] for an E^-2 flux is better than the limit from this analysis, because AMANDA has better sensitivity at lower energies and its livetime is about twice that used here.
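As a rough illustration of such a model test (not the procedure of Ref. [9]), the expected event count for an assumed all-flavor flux can be obtained by folding the flux with the angle-averaged effective areas of Fig. 3 and the livetime; a model predicting more than the Feldman-Cousins 90% C.L. upper limit of 2.44 events (for zero observed events and negligible background) would be disfavored. The effective-area grid below is a placeholder, not tabulated analysis values.

```python
import numpy as np

T_LIVE = 242.1 * 86400.0     # livetime [s]
OMEGA = 4.0 * np.pi          # [sr]; the Fig. 3 areas are already angle-averaged

# placeholder effective areas [m^2] on a coarse log10(E/GeV) grid (NOT analysis values)
log_e = np.array([7.0, 8.0, 9.0, 10.0])
a_eff_m2 = {"nu_e": [3.0, 40.0, 200.0, 600.0],
            "nu_mu": [10.0, 150.0, 900.0, 3000.0],
            "nu_tau": [5.0, 80.0, 500.0, 2000.0]}

def expected_events(e2_phi):
    """Expected events for a per-flavor flux with E^2 * phi = e2_phi
    [GeV cm^-2 s^-1 sr^-1], folded with area, solid angle and livetime."""
    e = 10.0 ** log_e
    total = 0.0
    for area in a_eff_m2.values():
        # dN/dE * A * dE/dlog10(E) = (e2_phi / E) * A * ln(10)
        integrand = e2_phi / e * np.array(area) * 1e4 * np.log(10.0)
        total += np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(log_e))
    return total * OMEGA * T_LIVE

# flux level giving 2.44 expected events: a schematic 90% C.L. sensitivity
print(2.44 / expected_events(1.0))
```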
The systematics, such as the detector sensitivity, the neutrino cross section, the hadronic interaction model and the yearly variation, are currently being investigated. The biggest uncertainty comes from the NPE difference observed with the absolutely calibrated light source in situ, and it is estimated to be of the order of 30%. These systematics are included in the upper limit calculation. The details of the systematics estimation, as well as more details of this analysis, will be presented in a paper in preparation.
ACKNOWLEDGEMENTS
We acknowledge the U.S. National Science Foundation and all the agencies that support the IceCube project. This analysis is particularly supported by the Japan Society for the Promotion of Science.
Fig. 4. The preliminary upper limit from the IC22 EHE analysis (red solid, all flavors) with the systematics taken into account. The thick long-dashed green line represents GZK model 1 [4], light blue GZK model 2 [10], blue GZK model 3 [17] and yellow the Z-burst model [11]. The dotted green line is the 90% C.L. upper limit for GZK model 1 from this analysis. The upper limits from other experiments are also shown with dashed lines [12], [13], [14], [15], [16]. Limits from other experiments are converted to all flavors where necessary, assuming full mixing neutrino oscillations.
REFERENCES
[1] The Pierre Auger Collaboration, Science 318, 896 (2007).
[2] V. S. Beresinsky and G. T. Zatsepin, Phys. Lett. 28B, 423 (1969).
[3] K. Greisen, Phys. Rev. Lett. 16, 748 (1966); G. T. Zatsepin and V. A. Kuzmin, Pisma Zh. Eksp. Teor. Fiz. 4, 114 (1966) [JETP Lett. 4, 78 (1966)].
[4] S. Yoshida and M. Teshima, Prog. Theor. Phys. 89, 833 (1993).
[5] S. Yoshida et al., Phys. Rev. D 69, 103004 (2004).
[6] T. K. Gaisser, Cosmic Rays and Particle Physics (Cambridge University Press, Cambridge, England, 1990), p. 206.
[7] M. Nagano and A. A. Watson, Rev. Mod. Phys. 72, 689 (2000).
[8] D. Heck et al., Report FZKA 6019 (Forschungszentrum Karlsruhe, 1998).
[9] X. Bertou et al., Astropart. Phys. 17, 183 (2002).
[10] O. E. Kalashev, V. A. Kuzmin, D. V. Semikoz, and G. Sigl, Phys. Rev. D 66, 063004 (2002).
[11] S. Yoshida, G. Sigl and S. Lee, Phys. Rev. Lett. 81, 5505 (1998).
[12] A. Silvestri (IceCube), Proc. 31st ICRC (2009).
[13] J. Abraham et al. (Auger), Phys. Rev. Lett. 100, 211101 (2008).
[14] P. W. Gorham et al. (ANITA), arXiv:0812.2715v1 (2008).
[15] I. Kravchenko et al. (RICE), Phys. Rev. D 73, 082002 (2006).
[16] R. U. Abbasi et al. (HiRes), ApJ 684, 790 (2008).
[17] R. Engel et al., Phys. Rev. D 64, 093010 (2001).
Study of very bright cosmic-ray induced muon bundle signatures measured by the IceCube detector
Aya Ishihara* for the IceCube Collaboration†
*Department of Physics, Chiba University, Chiba 263-8522, Japan
†See the special section of these proceedings.
Abstract. We present a study of cosmic-ray induced atmospheric muon signatures measured by the underground IceCube array, some of which coincide with signals in the IceTop surface detector array. In this study, cosmic-ray primary energies are associated with the total number of photoelectrons (NPEs) measured by the underground IceCube optical sensors using two methods. We find that multiple muons producing 10^4–10^5 NPEs in the IceCube detector in 2008 correspond to cosmic-ray primary energies of 10^7–10^9 GeV.
This association allows us to study cosmic-ray physics using photon distributions observed by the underground detector that are characterized by the properties of muon bundles. It is observed that the detailed NPE distributions in the longitudinal and lateral directions from muon tracks display the ranging-out effect of low energy muons in each muon bundle. The distributions from 2008 high energy muon data samples taken with the IceCube detector are compared with two different Monte Carlo simulations. The first is an extreme case that assumes a single high energy muon, in which nearly all of the energy loss is due to stochastic processes in the ice. The other uses the CORSIKA program with the SIBYLL and QGSJET-II high energy hadron interaction models, in which approximately half of the energy loss is due to ionization by low energy muons.
Keywords: IceCube, muon-bundle, high-energy
I. INTRODUCTION
Bundles of muons produced in the forward region of cosmic-ray air showers appear as bright signals in Cherenkov detectors. Multiple muon tracks with a small geometrical separation (called 'muon bundles') resemble a single muon with a higher energy. Understanding the background muon bundles using a full air shower MC simulation in the high energy range above 10^7 GeV is limited because the calculation involves poorly characterized hadronic interactions and requires knowledge of the primary cosmic ray composition at energies where no direct measurement is available. The experimental measurement of atmospheric muons provides an independent probe of the hadronic interactions and the primary cosmic-ray composition.
The IceCube neutrino observatory [1] provides a rare opportunity to access primary cosmic-ray energies beyond accelerator physics. The IceCube detector, located at the geographic South Pole, consists of an array of photon detectors which contains a km^3 fiducial volume of clean glacier ice as a Cherenkov radiator. Half of the final IceCube detector (IC40) was deployed by the end of the austral summer of 2008. The IC40 detector consists of 40 strings of cable assemblies with a string spacing of 125 m. Each string has 60 optical sensors (DOMs) spaced at intervals of ∼17 m and stretching between depths of ∼1450 m and ∼2450 m in the glacial ice. DOMs are also frozen into tanks located on the surface near the top of each string. The ice-filled tanks constitute an air shower array called IceTop [2]. IceTop can act as an independent air-shower array to measure cosmic-ray spectra as well as trigger simultaneously with the underground detector. This provides a reliable method to study the atmospheric muon bundles.
Data taking with the IC40 detector configuration was performed from April 2008 through March 2009. The high energy muon-bundle (HEMu) sample consists of events which measure between 6.3 × 10^2 and 6.3 × 10^4 photo-electrons (PEs) in at least 50 underground DOMs. An IceTop coincidence (HECoinc) sample is a subset of the HEMu sample with the additional requirement that IceTop successfully reconstructs the air shower event. Similarly, samples (called VHEMu and VHECoinc) with a higher NPE threshold of 7.0 × 10^3 are studied. Definitions of the samples are summarized in Table I.
Data studied in this paper were taken in the period of July to December 2008 with a livetime of 148.8 days. Event distributions of the samples are presented in Fig. 1.
TABLE I
Definitions of sample conditions.
Sample      Threshold NPE value   IceTop coincidence required
HEMu        6.3 × 10^2            no
HECoinc     6.3 × 10^2            yes
VHEMu       7.0 × 10^3            no
VHECoinc    7.0 × 10^3            yes
II. COSMIC-RAY ENERGY AND UNDERGROUND BRIGHTNESS RELATION
Because the energy losses of muon bundles are indicators of their energies and multiplicities, measurement of the total energy deposit of the muons (E_loss) in the detection volume is important for understanding the nature of muon bundles. Here, we use the total number of photo-electrons recorded by all underground DOMs (NPE) as an indicator of E_loss. The effective light deposit from
Fig. 1. Atmospheric muon event distributions from the 2008 sample as a function of NPE (left) and reconstructed zenith angle θ (right). Filled squares denote HEMu and triangles HECoinc. Inverse triangles and open circles are the corresponding VHE samples as defined in Table I. Coincidence samples show a high detection efficiency for vertical events, and the efficiency drops with zenith angle. Event rates decrease by ∼2.5 orders of magnitude when NPE is increased by an order of magnitude.
bundles can be parameterized with an effective track length l_0 as [3], [4],

NPE ∝ l_0 (α N_μ + β ΣE_μ) ∝ E_loss .    (1)

Here, N_μ and ΣE_μ indicate the multiplicity and the energy sum of the underground muons, respectively, and α and β are ionization and radiative energy loss coefficients, assumed to be constant with energy. Primary cosmic-ray energies are related to the NPE with two methods. The first method is to directly relate the underground NPEs to the IceTop cosmic-ray energy reconstruction results. The other is to construct an empirical model to characterize the event frequencies of underground NPEs from the experimentally measured cosmic-ray surface fluxes [5]. The former method has the advantage that both the cosmic-ray energy and the underground brightness are consistently measured quantities, while the directional acceptance is limited to near vertical. The latter method requires a model assumption for the underground bundle spectral shape, but full angular acceptance is available.
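A toy version of Eq. (1) is shown below; the coefficient values and bundle properties are placeholders chosen only to illustrate how multiplicity and energy sum enter the brightness estimate, and the symbols α, β follow the reconstruction above.

```python
def npe_estimate(track_length_m, n_mu, sum_e_mu_gev,
                 alpha=0.2, beta=3.0e-4, norm=1.0):
    """Toy Eq. (1): NPE proportional to l_0 * (alpha*N_mu + beta*Sum(E_mu)).
    alpha (ionization) and beta (radiative) are placeholder coefficients."""
    return norm * track_length_m * (alpha * n_mu + beta * sum_e_mu_gev)

# a bright bundle: 2000 muons carrying 5e5 GeV in total over ~1 km of track,
# compared with the single-muon substitution (N_mu = 1, same total energy)
bundle = npe_estimate(1000.0, 2000, 5.0e5)
single = npe_estimate(1000.0, 1, 5.0e5)
```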
A. IceTop coincidence signals
Figure 2 shows the measured underground NPE distribution as a function of the cosmic-ray energy reconstructed by the IceTop air-shower array. The energy determination method of the IceTop array is described in [6]. A clear correlation shows that bright underground events are associated with high energy cosmic-ray induced air showers, and each NPE region roughly corresponds to a different cosmic-ray energy regime. For example, cosmic-ray primary energies of ∼3.0 × 10^7 GeV are associated with 10^4 NPE underground events. As shown in Fig. 1, because of the IceTop coincidence condition, most events in this sample are near vertical.
B. The empirical model
A high energy muon empirical model is constructed as in [7]. In the model construction, the amount of energy
Fig. 2. Event distributions of the HECoinc sample as a function of NPE and IceTop reconstructed primary cosmic-ray energy. A clear correlation is observed.
that goes into the muon bundle from the cosmic-ray primary is expressed in terms of the energy-weighted integral of the Elbert formula [8]. Because a major part of the NPEs from muon tracks is expected to be due to the radiative processes in the very bright events, it is assumed in the model that the NPEs from ionization are negligible compared to the stochastic energy losses, i.e. N_μ = 1 in Eq. 1. We then fit the experimental data with this model by varying ΣE_μ in Eq. 1 until it reproduces the experimentally observed NPE event rates. The total energy in the bundle, ΣE_μ, is carried by a single muon, and the muon is simulated with [9]. The model is constructed based on the data sample taken in 2007. The present sample from 2008 separately confirms the agreement, as shown in Fig. 3, above the NPE threshold of 7.0 × 10^3. Below the threshold value, the model assumption that nearly all energy losses are due to radiative processes is expected to fail. The relation between the true cosmic-ray energy and NPE is shown in Fig. 4. The relation shows reasonable agreement with the experimentally measured relation shown in Fig. 2 in the overlapping acceptance region. An extrapolation of the relation indicates that the corresponding primary cosmic-ray energy increases to 10^9 GeV for muon bundle signals with 10^5 underground NPE.
III. ENERGY LOSSES OF MUONS IN BUNDLES
A. Muon spectra in bundles
Average muon spectra in a bundle for different total NPE ranges from the CORSIKA MC simulation, using SIBYLL and QGSJET-II as high energy interaction models with iron primaries, and the corresponding single-muon energy distribution from the empirical model, are shown in Fig. 5. The plot shows that the number of muons reaching the IceCube depth in the CORSIKA simulations increases with the total NPE. While there is a large difference between the muon bundle spectra from the CORSIKA full air-shower simulations and the high energy single-muon empirical model, both describe the NPE event rates with reasonable agreement (Fig. 3). There is no significant difference in the muon spectra from
Fig. 3. Event distributions as a function of NPE (left) and reconstructed zenith angle θ (right). Squares and inverse triangles denote the 2008 high energy event samples as in Fig. 1. Filled histograms are from the Monte Carlo simulation of the high energy muon empirical model as described in the text. Dark and light colored histograms are from CORSIKA MC simulations using SIBYLL and QGSJET-II, respectively, as high energy interaction models with iron primaries. Event distributions from proton primaries highly underestimate the event rates. All three MC simulations give reasonable agreement with the experimental observation.
Fig. 4. The correlation between primary cosmic-ray energy and underground NPE from MC simulation with the high energy muon empirical model. A relation consistent with the IceTop/underground coincidence measurement is obtained.
SYBILL and QGSJET-II high energy interaction models
with iron primary below 4:0 ? 10
4
NPE, but they
exhibits some difference for the brighter events which
approximately corresponds to the primary cosmic ray
energies above ˘ 10
8
GeV.
The fact that the event rates as a function of the total NPE appear consistent among the three estimations with different muon-bundle models indicates that the NPE of an event is insensitive to the energy spectra of the muon bundles. It implies that it is difficult, using the total NPE alone, to distinguish whether the observed photon emission is dominated by the first or by the second term in Eq. 1. This indicates that the NPE measure is a systematically robust variable when used in an analysis such as [7]. On the other hand, this variable is not sufficient to evaluate the muon-bundle structure in each event.
[Fig. 5. Average muon MC-truth energy spectra in a bundle for different NPE ranges, shown for SYBILL and QGSJET-II with iron primaries and for the empirical single-muon model (multiplied by 100 for better visibility). The solid and dashed lines represent different NPE regions, which approximately correspond to different cosmic-ray primary energies as shown in Fig. 4. In the brightest events, both CORSIKA high-energy models predict more than 5,000 muons per bundle reaching the underground detector. The muon in the single-muon empirical model has energies between 100 TeV and 10 PeV.]
The nature of the muon bundles, such as the muon spectra in Fig. 5, is expected to appear in more detailed NPE distributions along the muon-bundle tracks.
B. The lateral and longitudinal NPE distributions
The NPE distributions as functions of the distances along and perpendicular to the track are shown in Fig. 6. In these plots, only vertically reconstructed events (θ ≤ 15 degrees) are used. Vertical tracks are suitable for measuring the detailed longitudinal development of the energy losses because the DOM separation in the z direction is only 17 m, compared to 125 m in the x-y plane. The detected Cherenkov photon profile shows a good correlation with the depth dependence of the measured optical properties of the glacier ice. Fig. 6a shows a typical three-dimensional NPE distribution of an observed high-energy muon-bundle track. The lower panels show averaged NPE distributions in the 2D plane from vertical VHEMu events for the 2008 data, SYBILL-iron and the empirical model. There are visible differences in the 2D light-deposit distributions between data and models that give similar NPE. The detailed NPE distributions can be further examined as a function of the longitudinal distance along the track at various lateral distances, as shown in Fig. 7. Each solid line denotes a different lateral distance, in 50 m intervals, and the distributions correspond to slices along the longitudinal distance in the left panel of Fig. 6b. The NPE observed by each DOM decreases rapidly with lateral distance. The closest longitudinal NPE distribution (< 50 m) shows that at the top of the IceCube detector ∼800 NPEs are observed in each DOM, gradually decreasing to ∼300 NPEs at the bottom of the detector. This is expected to be due to the ranging-out of low-energy muons in the bundles as they travel through the detector. It clearly shows that the longitudinal NPE profiles close to the track are sensitive to the muon energy-loss profile.
[Fig. 6. The lateral and longitudinal NPE distributions from high-energy muon-bundle events that produce very bright event signatures. (a) Left: a typical NPE space distribution of a bright event in 2008; the size of the squares indicates log10(NPE), the solid line indicates the reconstructed direction, and a loss of photons due to a dusty ice layer around z = -100 m is visible. Right: the NPE from each DOM plotted as a function of the distances perpendicular to and along the reconstructed track; filled bins mark where DOMs exist in this lateral-longitudinal plane, and the z-axis indicates the measured NPE; when more than one DOM falls in a bin, the average NPE is shown. (b) Averaged lateral and longitudinal NPE distributions of vertical bright events. Left: the vertically reconstructed VHEMu sample. Middle: CORSIKA-SYBILL with iron primaries. Right: the high-energy single-muon empirical model.]
The effect is less visible for photons propagated more than 50 m from the track, where the
effects of ice properties begin to dominate. The effect
of the ice layers with different scattering/absorption properties strongly modifies the lateral NPE distributions in this case. The NPE distributions close to the track are therefore suitable for studying muon-bundle properties, while the NPEs at larger distances reflect the nature of photon propagation through the ice.
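A minimal sketch of the geometry behind the maps in Fig. 6: each DOM is assigned a longitudinal and a lateral distance with respect to the reconstructed track, and the NPE values are averaged per 2D bin. The binning, the DOM positions and the function names below are illustrative assumptions, not the analysis code.

import numpy as np

def track_coordinates(dom_pos, vertex, direction):
    """Signed longitudinal and lateral distance (m) of a DOM relative to
    a reconstructed track given by a vertex point and a direction vector."""
    d = np.asarray(direction, dtype=float)
    d = d / np.linalg.norm(d)
    r = np.asarray(dom_pos, dtype=float) - np.asarray(vertex, dtype=float)
    longitudinal = float(np.dot(r, d))                     # along the track
    lateral = float(np.linalg.norm(r - longitudinal * d))  # perpendicular to it
    return longitudinal, lateral

def average_npe_map(dom_positions, npe, vertex, direction, long_bins, lat_bins):
    """Average NPE per (longitudinal, lateral) bin, in the spirit of Fig. 6b.
    Bins containing more than one DOM are averaged; empty bins are NaN."""
    total = np.zeros((len(long_bins) - 1, len(lat_bins) - 1))
    count = np.zeros_like(total)
    for pos, q in zip(dom_positions, npe):
        lon, lat = track_coordinates(pos, vertex, direction)
        i = np.searchsorted(long_bins, lon) - 1
        j = np.searchsorted(lat_bins, lat) - 1
        if 0 <= i < total.shape[0] and 0 <= j < total.shape[1]:
            total[i, j] += q
            count[i, j] += 1
    return np.where(count > 0, total / np.maximum(count, 1), np.nan)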
IV. OUTLOOK
The various parts of the lateral and longitudinal NPE profiles in this two-dimensional space are governed by the nature of the muon bundles and by the optical properties of the ice in different ways. Specifically, a detailed study of the longitudinal NPE profiles at different lateral distances is important for a better understanding of both the muon-bundle and the ice-property modeling.
The contributions from ionization and radiative energy losses to the obtained lateral and longitudinal NPE distributions are not distinguishable so far. This is because the longitudinal NPE profiles shown in Fig. 7 are obtained from multiple events, so the stochastic nature of the energy losses is averaged out.
[Fig. 7. Averaged longitudinal NPE distributions of the vertical VHEMu event sample. Each solid line denotes the longitudinal NPE distribution at a given lateral distance, in 50 m intervals. From the top line to the bottom, the intervals are 0-50 m, 50-100 m, 100-150 m, 150-200 m, 200-250 m and 250-300 m, respectively. A clear NPE development in both the longitudinal and lateral directions is visible.]
However, a large difference between ionization and radiative energy losses is expected to appear in the event-by-event fluctuations of the longitudinal/lateral NPE distributions. The sizes of the fluctuations from stochastic energy losses are evaluated in [4] using the MMC program [10], while the fluctuations from ionization are expected to scale as √N_µ.
The deviations of the NPE along the track from the average NPE per DOM also receive contributions from variations of the ice properties. Because the ice properties do not fluctuate on an event-by-event basis, it is possible to distinguish the variation due to the ice properties from the fluctuation due to stochastic energy losses. The variations in the NPEs near the track, which are less affected by the ice properties, together with the event-by-event NPE fluctuation at a given depth, are expected to be sensitive parameters for the stochastic part of the muon-bundle energy losses.
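One way to sketch this separation is to treat the depth-wise sample average of the near-track profiles as an ice template and to keep the per-event residuals as the stochastic part. The snippet below only illustrates that logic under assumed input shapes and names; it is not the procedure used in the analysis.

import numpy as np

def split_ice_and_stochastic(near_track_profiles):
    """near_track_profiles: (n_events, n_depth_bins) array of near-track
    NPE per DOM for vertical events. The depth-wise average tracks the
    (event-independent) ice layering; the per-event relative residuals
    retain the event-by-event, i.e. stochastic, component."""
    p = np.asarray(near_track_profiles, dtype=float)
    ice_template = np.nanmean(p, axis=0)   # shared depth dependence (ice)
    residuals = p / ice_template - 1.0     # per-event fluctuation at each depth
    return ice_template, residuals

def relative_fluctuation(residuals):
    """RMS of the residuals per event; for ionization-dominated light this
    relative fluctuation would be expected to scale like 1/sqrt(N_mu)."""
    return np.sqrt(np.nanmean(np.asarray(residuals) ** 2, axis=1))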
V. ACKNOWLEDGMENTS
We acknowledge the U.S. National Science Foundation and all the agencies supporting the IceCube project. This analysis is supported in particular by the Japan Society for the Promotion of Science.
REFERENCES
[1] J. Ahrens et al., Astropart. Phys. 20, 507 (2004); http://icecube.wisc.edu/.
[2] T. Gaisser, in Proceedings of the 30th ICRC, Merida (2007).
[3] C. H. Wiebusch, Ph.D. thesis, Physikalisches Institut, RWTH Aachen (1995).
[4] P. Miočinović, Ph.D. thesis, UC Berkeley (2001).
[5] M. Nagano and A. A. Watson, Rev. Mod. Phys. 72, 689 (2000).
[6] F. Kislat, S. Klepser, H. Kolanoski and T. Waldenmaier for the IceCube Collaboration, these proceedings.
[7] K. Mase, A. Ishihara and S. Yoshida for the IceCube Collaboration, these proceedings.
[8] T. K. Gaisser, Cosmic Rays and Particle Physics (Cambridge University Press, 1990).
[9] http://www.ppl.phys.chiba-u.jp/JULIeT/
[10] D. Chirkin and W. Rhode, arXiv:hep-ph/0407075.
PROCEEDINGS OF THE 31st ICRC, ŁÓDŹ 2009
Search for High Energetic Neutrinos from Supernova Explosions
with AMANDA
Dirk Lennarz∗ and Christopher Wiebusch∗ for the IceCube Collaboration†
∗III. Physikalisches Institut, RWTH Aachen University, 52056 Aachen, Germany
†See the special section of these proceedings
Abstract. Supernova explosions are among the most
energetic phenomena in the known universe. There
are suggestions that cosmic rays up to EeV energies
might be accelerated in the young supernova shell
on time scales of a few weeks to years, which would
lead to TeV neutrino radiation. The data taken
with the AMANDA neutrino telescope in the years
2000 to 2006 is analysed with a likelihood approach
in order to search for directional and temporal
coincidences between neutrino events and optically
observed extra-galactic supernovae. The supernovae
were stacked in order to enhance the sensitivity. A
catalogue of relevant core-collapse supernovae has
been created. This poster presents the results from
the analysis.
Keywords: AMANDA, high energy neutrino astron-
omy, supernova
I. INTRODUCTION
Almost a hundred years after their discovery, the
acceleration mechanisms and sources of the cosmic
rays remain an unsolved problem of modern astronomy.
Neutrino astronomy can be an important contribution to
the solution of this problem. Young supernovae in con-
nection with a pulsar have been proposed as a possible
source of cosmic rays with energies up to the ankle. This
pulsar model can be directly tested by measuring high
energetic (TeV) neutrino radiation on time scales of a
few weeks to years after the supernova [1][2].
The AMANDA-II neutrino telescope is located in the clear ice at the geographic South Pole and has been fully operational since 2000. It reconstructs the direction of high-energy neutrinos by measuring the Cherenkov light from secondary muons. The main backgrounds are muons and neutrinos produced in air showers in the atmosphere.
This analysis uses seven years of AMANDA data taken during 2000-2006 with a total live-time of 1386 days. The data reconstruction and filtering are described in [3], and the final event sample contains 6595 events. The contamination from mis-reconstructed atmospheric muon events is less than 5% for declinations greater than 5°.
II. PULSAR MODEL
The liberation of rotational energy from a pulsar can
accelerate particles to relativistic energies. Secondary
particles, for example pions, are created in the interac-
tion with the expanding supernova envelope and decay
into neutrinos and other particles. In this analysis the pulsar model as described in [2] is used.
[Fig. 1. Typical supernova neutrino model light curve (arbitrary units vs. log10 of the time in seconds).]
Thermonuclear
supernovae have no pulsar inside the envelope and are
therefore not considered by this model.
The phase of powerful, high-energy neutrino emission is limited by two characteristic times: the time at which the pion decay time becomes shorter than the time between two nuclear collisions (t_π), and the time at which the density of the envelope becomes small enough for accelerated particles to escape into interstellar space without interaction (t_c). The supernova neutrino luminosity as a function of time (the model light curve) is given by:
L(t) = \left(1 - \exp\left[-\left(\frac{t_c}{t}\right)^{2}\right]\right) \cdot \frac{1}{1 + \left(t_\pi / t\right)^{3}} \cdot \lambda L_0 \left(1 + \frac{t}{\tau}\right)^{-2} ,    (1)
where λ is the fraction of the total magnetic dipole luminosity L_0 (in erg/s) that is transferred to accelerated particles and τ is the characteristic pulsar braking time.
The shape and length of the model light curve depend on the supernova envelope mass (M_e), its uniformity (described by a parameter called ξ), the expansion velocity (V), the pulsar braking time and the maximum pion energy. An E^{-2} neutrino energy spectrum is assumed, with an energy cutoff at 10^{14} eV. Fig. 1 shows a typical model light curve for t_π ≈ 8 × 10^3 s and t_c ≈ 2 × 10^6 s. These values are obtained by choosing M_e = 3 M_⊙, ξ = 1, V = 0.1c and τ = 1 year.
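Eq. 1 translates directly into a short numerical sketch. The characteristic times are passed in rather than derived from M_e, ξ and V, and λ and L_0 are left as overall scale factors, so only the shape of the light curve is meaningful here.

import numpy as np

def model_light_curve(t, t_pi=8e3, t_c=2e6, tau=3.15e7, lam=1.0, l0=1.0):
    """Supernova neutrino luminosity L(t) of Eq. 1 (arbitrary units).

    t in seconds; defaults follow the example in the text:
    t_pi ~ 8e3 s, t_c ~ 2e6 s, tau = 1 year ~ 3.15e7 s."""
    t = np.asarray(t, dtype=float)
    escape = 1.0 - np.exp(-(t_c / t) ** 2)        # cut-off once particles escape freely
    pion_decay = 1.0 / (1.0 + (t_pi / t) ** 3)    # turn-on once pions decay before colliding
    spin_down = lam * l0 * (1.0 + t / tau) ** -2  # pulsar dipole spin-down luminosity
    return escape * pion_decay * spin_down

if __name__ == "__main__":
    for ti in np.logspace(1, 10, 10):             # 10 s ... 10^10 s, as in Fig. 1
        print(f"t = {ti:9.3e} s   L/L0 = {model_light_curve(ti):.3e}")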
III. SUPERNOVA CATALOGUES
For this analysis a catalogue of supernovae was cre-
ated. It combines three different electronically avail-
able and regularly updated SN catalogues [4][5][6].
A comparison of the three catalogues revealed some
inconsistencies in the listed information. A consistent
selection was made with special attention to the objects
mistaken for a supernova observation, the total number
of supernovae and the supernova positions.
Fig. 2 shows the distribution of the 4805 supernovae
observed between 1885 and 2008. The clearly visible structure around the celestial equator consists of supernovae found by the Sloan Digital Sky Survey-II supernova survey. The nearest and best visible supernova for
AMANDA was SN2004dj in NGC 2403 at a distance
of approximately 3.33 Mpc.
[Fig. 2. Distribution of the observed supernovae in equatorial coordinates, with the galactic plane indicated as a dashed line. Due to the background from atmospheric muons, only supernovae in the northern hemisphere are relevant.]
This analysis searches for directional and temporal co-
incidences between neutrinos and supernovae. Therefore
additional input has to be quantified for each supernova.
Firstly, the expected neutrino flux has to be determined
from an accurate distance. The supernova distance can
be identified with the distance to the host galaxy and can
be estimated from the redshift. The redshift estimate is
replaced by a measured distance (e.g. Cepheid variables
or Tully-Fisher relation) if available. This improves the
distance accuracy for nearby supernovae, which are most
relevant.
Secondly, the explosion date is needed for the temporal
correlation, but only the date of the optical maximum
or the discovery date is available. From some well
observed SNe (e.g. 1999ex and 2008D) it is known that
the optical maximum occurs around 15-20 days after
the explosion, which is used as a benchmark. Fig. 3
shows the difference between the date of discovery and
the date of maximum for those cases where the light
curve was fitted to a template and the date of maximum
extrapolated backwards in time or found on old photo
plates. The majority of the supernovae are discovered
within 20 days after the optical maximum. Hence, the
discovery is assumed to be typically 20 days after the
optical maximum. The uncertainty of the explosion date
is accounted for in the likelihood approach.
[Fig. 3. Number of days between the optical maximum and the discovery, for supernovae discovered after the maximum. A linear one-day binning is shown on a logarithmic x-axis.]
Thirdly, the input needed for the pulsar model is not available for the individual supernovae. Therefore, all supernovae are treated equally. The influence of the
model light curve on the analysis is tested by defining
two additional sets of parameters which result in light
curves with very short and long neutrino emission.
Hence, altogether three different model light curves
(typical, short, long) are used. The width of the plateau
that can be seen in Fig. 1 is 12 days for the typical, 1
day for the short and 76 days for the long light curve.
The most realistic assumption for the supernovae in the
catalogue is that they have individual realisations of the
parameters of the pulsar model and therefore individual
light curves between the extreme cases.
IV. LIKELIHOOD APPROACH
A new likelihood approach was developed for this
analysis [7]. Its principal idea is to compare all neutrino
events from the experimental data sample to every rele-
vant supernova and evaluate the likelihood ratio (LHR)
between the hypothesis that this event is signal and
the hypothesis of being background. This yields a large
value for a good match and a small value for a bad one. The LHR for all events is summed in order to obtain a
cumulative estimator, called Q:
Q = \sum_{\mathrm{events}} \frac{\sum_{\mathrm{SN}} p(a\,|\,\mathrm{SN})\, p(\mathrm{SN})}{p(a\,|\,\mathrm{BG})\, p(\mathrm{BG})} ,    (2)
where a denotes the vector of characteristic observables of the event.
The advantage of this likelihood definition is that it
can be extended to a stacking analysis. Q automatically
assigns a small weight to irrelevant combinations of
neutrinos and supernovae, while relevant ones receive a
larger weight. Thus, all supernovae from the catalogue
can be used in the analysis and no optimisation on the
number of sources is needed. Q is a sum of likelihood
ratios and therefore its absolute value contains no phys-
ical information.
The probabilities in the likelihood sum are constructed
from properties of AMANDA, the experimental data
sample and the considered model light curve. p(BG) is
the probability to have background and is an unknown
but constant factor. This probability is eliminated by
redefining Q to Q · p(BG).
p(a|BG) is the probability that, assuming an event is
background, it is observed at its specific time and from
its specific direction. It is factorised into a temporal
and an angular part. The temporal part corresponds to
the AMANDA live-time. However, it cancels out with
the corresponding temporal part of p(a|SN). AMANDA
does not distinguish between signal and background
neutrinos and was obviously taking data when the event
was measured. The angular probability is constructed
with the normalised zenith angle distribution of the
experimental data sample (see Fig. 2 in [3]). The azimuth
probability is constant, because AMANDA is completely
rotated each day and the azimuth is randomised for the
relevant time scales of this analysis.
The supernova signal probability consists of p(a|SN)
and p(SN). The first part depends on the specific event
and is the probability that an event from a supernova is
observed at a given time from a given direction. p(SN)
is the probability to observe a signal from that supernova
and is estimated for each supernova.
For p(a|SN) two terms are considered. p(Ψ|SN) is the probability that a neutrino from a supernova is reconstructed with an angular difference Ψ relative to the supernova direction. This probability is calculated from the point-spread function, which is obtained from Monte Carlo simulations. The second term, p(t, t_SN|SN), yields the probability that a neutrino arrives with a time offset t − t_SN from the explosion date. This probability
is taken from a likelihood light curve.
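A compact sketch of the estimator in Eq. 2, with the ingredients described above reduced to callables. The probability functions (point-spread function, likelihood light curve, zenith distribution of the data sample, per-supernova weight) are placeholders to be supplied from the actual detector and catalogue information; the constant p(BG) is absorbed into Q, as in the text.

import numpy as np

def angular_distance(ra1, dec1, ra2, dec2):
    """Great-circle angle (rad) between two equatorial directions (rad)."""
    cos_psi = (np.sin(dec1) * np.sin(dec2)
               + np.cos(dec1) * np.cos(dec2) * np.cos(ra1 - ra2))
    return np.arccos(np.clip(cos_psi, -1.0, 1.0))

def q_estimator(events, supernovae, p_psf, p_time, p_zenith_bg, p_sn_weight):
    """Cumulative likelihood-ratio estimator Q of Eq. 2.

    events: iterable of dicts with 'time', 'zenith', 'ra', 'dec'.
    supernovae: iterable of dicts with 'time', 'ra', 'dec'.
    p_psf(psi): probability of an angular difference psi (point-spread function).
    p_time(t, t_sn): likelihood light curve value for the offset t - t_sn.
    p_zenith_bg(zenith): normalised zenith distribution of the data sample.
    p_sn_weight(sn): relative weight p(SN) (flux ~ 1/d^2, angular acceptance)."""
    q = 0.0
    for ev in events:
        signal = 0.0
        for sn in supernovae:
            psi = angular_distance(ev["ra"], ev["dec"], sn["ra"], sn["dec"])
            signal += p_psf(psi) * p_time(ev["time"], sn["time"]) * p_sn_weight(sn)
        background = p_zenith_bg(ev["zenith"])  # azimuth part is constant
        q += signal / background
    return q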
In order to be less model dependent three generic
likelihood light curves (typical, short, long) are con-
structed. They are inspired by the model light curves and
constructed conservatively in order to not miss signal by
accidentally looking too early or too late. Hence, if the
date of the optical maximum is known, the starting time
for the likelihood light curves (t = 0) is defined to be
30 days earlier. In case only the date of the discovery
is known a 50 days earlier starting time is used. This
makes sure that the explosion is not missed, because the
time shift to the explosion date is overestimated by about
15 days for the optical maximum and up to 35 days for
the date of discovery. The likelihood light curves consist
of a half Gaussian for t < 0, a plateau for t > 0 and
another half Gaussian after the plateau. The length of the plateau is the full width at 90% of the model light curve, enlarged by the uncertainty of the explosion date.
This uncertainty is bigger if only the date of discovery
is known. The width of the Gaussian after the plateau
is the full width at half maximum (FWHM) after the
plateau of the model light curves. Fig. 4 shows the
typical likelihood light curve for the date of discovery.
[Fig. 4. Typical likelihood light curve (probability vs. days).]
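A sketch of the generic likelihood light curve just described: a half Gaussian before t = 0, a plateau, and a half Gaussian after it. The widths are left as free parameters here; in the analysis the plateau length comes from the 90% full width of the model light curve plus the explosion-date uncertainty, and the trailing width from the post-plateau FWHM.

import numpy as np

def likelihood_light_curve(t, plateau_days, sigma_rise_days, sigma_fall_days):
    """Unnormalised likelihood light curve value at time t (days), measured
    from the assumed starting time (30 d before the optical maximum, or
    50 d before the discovery date, as described in the text)."""
    t = np.asarray(t, dtype=float)
    rise = np.exp(-0.5 * (t / sigma_rise_days) ** 2)                   # half Gaussian, t < 0
    fall = np.exp(-0.5 * ((t - plateau_days) / sigma_fall_days) ** 2)  # half Gaussian after plateau
    return np.where(t < 0, rise, np.where(t <= plateau_days, 1.0, fall))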
p(SN) depends on the supernova neutrino luminosity,
distance and direction. The absolute value of p(SN) is
determined by the supernova neutrino luminosity and is
a free parameter of this analysis. However, the absolute
normalisation is not required, because a constant factor
results in a rescaling of Q and hence only relative values
are important. All supernovae are assumed to have the
same neutrino luminosity at the source. p(SN) decreases, like the flux, with the square of the supernova distance. AMANDA
is not equally sensitive to neutrinos from all directions.
Therefore the angular acceptance for different supernova
directions is taken into account.
V. SIGNAL AND BACKGROUND SIMULATION
Q distributions for signal and background simulations
are used to construct confidence belts with the Feldman-
Cousins approach to the analysis of small signals [8].
Each simulated data sample contains 6595 signal or
background events like the experimental data set. Back-
ground events are simulated with the zenith angle dis-
tribution of the experimental data and the AMANDA
live-time. For the signal simulation a model light curve
and the AMANDA angular and temporal acceptance is
simulated. The angular acceptance includes a random
simulation of assumed systematic uncertainties of the
measured rate of high energetic muon neutrinos [3].
The simulation of the temporal and angular acceptance
reduces signal events from days with low live-time or
unfavourable supernova directions.
The confidence belts are used to estimate the sen-
sitivity of the analysis. The sensitivity for the long
model light curve is not competitive with that of [3].
Furthermore, if the supernovae have short model light
curves, the sensitivity is comparable for the short and
typical likelihood light curves. The typical pulsar model
is best detected with the typical likelihood light curve.
Therefore the experimental data is analysed with the
typical likelihood light curve, because it can cover a
larger range of possible parameters.
VI. EXPERIMENTAL RESULT
Analysing the experimental AMANDA data with the
typical likelihood light curve yields:
Q^{\mathrm{Exp}}_{\mathrm{typical}} = 0.0059 .    (3)
Fig. 5 shows this value in a Q distribution for back-
ground only. The p value for obtaining a Q value equal to or bigger than 0.0059 is 73.0%. Hence, the Q value is
consistent with background and no deviation from the
background only hypothesis is found. Therefore upper
limits for the three model light curves are derived.
[Fig. 5. Q value from analysing the experimental data sample with the typical likelihood light curve (horizontal line), shown in a distribution for background only; p value = 73.0%.]
90% upper limits on the signal strength are derived
from the Feldman-Cousins confidence belts. With the
help of the signal simulation explained above this value
converts to the sum of neutrinos from all supernovae.
Assuming the above model ranking of sources and the
stacking this limit can also be interpreted as a limit on
the number of neutrinos from SN2004dj. Tab. I shows
the obtained upper limits for the typical, short and long
pulsar model light curve.
TABLE I
90% upper limits on the number of neutrinos from all supernovae and from SN2004dj.

Pulsar model | All SNe | SN2004dj
Typical      | < 5.4   | < 1.0
Short        | < 4.1   | < 0.9
Long         | < 67.3  | < 5.9
The event numbers can be converted into a flux by integrating the AMANDA neutrino effective area with the expected signal energy spectrum (an E^{-2} spectrum with a cutoff at 10^{14} eV). Assuming the typical pulsar model for all supernovae and taking the average of the effective area over all directions, the 90% upper limit on the flux from all supernovae during the plateau of powerful neutrino radiation (12 days) is:
\frac{d\phi}{dE} \cdot E^{2} < 5.2 \times 10^{-6}\ \frac{\mathrm{GeV}}{\mathrm{cm^{2}\, s}} .    (4)
Using the effective area for the direction of SN2004dj, the corresponding 90% upper limit for SN2004dj is:
\frac{d\phi}{dE} \cdot E^{2} < 8.4 \times 10^{-7}\ \frac{\mathrm{GeV}}{\mathrm{cm^{2}\, s}} .    (5)
These limits are valid in the energy range from 1.1
TeV to 84.0 TeV.
Assuming that the energy range of the pulsar model
as described in [2] can be extended to higher energies,
the limits would improve by about 30% and are then
valid in the energy range from 1.7 TeV to 2 PeV.
VII. CONCLUSION
For the first time the neutrino emission from young
supernova shells was experimentally investigated. In
the context of the pulsar model no deviation from the
background only hypothesis was found.
For a galactic supernova the expected flux from the
pulsar model should be sufficient to be detectable by
IceCube, the AMANDA successor. The sensitivity of
this analysis might be enhanced by using an energy
estimator in the likelihood and the individual event re-
construction error instead of the energy averaged point-
spread function.
REFERENCES
[1] V. S. Berezinsky and O. F. Prilutsky, Pulsars and Cosmic Rays in
the Dense Supernova Shells, A&A, 66 (1978), no.3, pp. 325-334
[2] H. Sato, Pulsars Covered by the Dense Envelopes as High-
Energy Neutrino Sources, Progr. Theor. Phys., 58 (1977), no.2,
pp. 549-559
[3] J. Braun et al., Search for point sources of high energy neutrinos
with final data from AMANDA-II, Phys. Rev. D, 79 (2009), no.6,
pp. 062001-1 - 062001-15
[4] List of supernovae from the CBAT, http://cfa-www.harvard.edu/iau/lists/Supernovae.html
[5] Asiago Supernova Catalogue, http://cdsarc.u-strasbg.fr/viz-bin/Cat?B/sn
[6] Sternberg Astronomical Institute (SAI) Supernova Catalogue, http://www.sai.msu.su/sn/sncat/
[7] D. Lennarz, Search for High Energetic Neutrinos from Supernova
Explosions with the AMANDA Neutrino Telescope, Diploma
thesis, RWTH Aachen University (2009)
[8] G. J. Feldman and R. D. Cousins, Unified Approach to the
Classical Statistical Analysis of Small Signals, Phys. Rev. D,
57 (1998), no.7, pp. 3873-3889
PROCEEDINGS OF THE 31st ICRC, ŁÓDŹ 2009
Search for Ultra High Energy Neutrinos with AMANDA
Andrea Silvestri
∗
for the IceCube Collaboration
†
∗
Department of Physics and Astronomy, University of California, Irvine, CA 92697, USA.
†
See the special section of these proceedings.
Abstract. We present results from the search for
diffusely distributed Ultra High Energy (UHE) neu-
trinos performed on data collected in 2003-2005 with
the AMANDA experiment. At energies above a few
PeV the Earth is opaque to neutrinos, therefore
neutrinos must be differentiated from downward
going cosmic ray induced (bundles of) muons. A
search for a diffuse flux of UHE neutrinos shows
no events, leading to a flux limit, summed over
all flavors, of E^2 Φ_ν ≤ 8.4 × 10^{-8} GeV cm^{-2} s^{-1} sr^{-1} (90% confidence level) for 10^{15.2} eV < E_ν < 10^{18.8} eV. This limit is the most stringent placed to date. A number of model predictions different from the E^{-2} spectrum have been tested and some have been rejected at the 90% C.L. We show that these results
can also place a limit on the flux from point sources
in the Southern Sky as a function of declination and
valid in the same energy range.
Keywords: Diffuse sources, high energy neutrinos,
AMANDA
I. INTRODUCTION
Neutrino production from Active Galactic Nuclei (AGN) and other astrophysical sources has been extensively modeled during the past two decades, as described in [1], [2], [3]. Super-massive black holes hosted in the AGNs would accelerate, via a first-order Fermi mechanism, charged particles to ultra high energies. The collision of ultra-relativistic protons with the photon field in the AGN, via pγ and pp-interactions, would then produce high-energy neutrinos. The predicted flux of neutrinos from these astronomical sources could reach the Earth and be detected by underground neutrino telescopes. Other theoretical calculations, as presented
in [4], [5], [6] and [2], derive an upper bound to
the expected neutrino fluxes from high energy cosmic
ray observations. These predictions, based on a model-
independent approach, provide also a target for neutrino
detector sensitivities. The predicted upper bound (ν_µ and ν̄_µ combined) for an E^{-2} spectrum is E^2 Φ^{WB}_ν ≤ 2 × 10^{-8} ξ_z GeV cm^{-2} s^{-1} sr^{-1}, where ξ_z accounts for the cosmological model and source evolution. Using a cosmological dependence and source evolution that follows the star-formation rate over cosmological time gives ξ_z ∼ 3. Assuming ν_µ/ν_e = 0.5 at the source, as produced in pγ and pp collisions, the upper bound on the total flux for all neutrino flavors becomes E^2 Φ^{WB}_ν ∼ 9 × 10^{-8} GeV cm^{-2} s^{-1} sr^{-1} (i.e. 2 × 10^{-8} × 3 ≈ 6 × 10^{-8} for ν_µ + ν̄_µ, with a further factor 3/2 for all three flavors).
II. ANALYSIS
The analysis results presented in this paper incorporate three years of AMANDA data collected in 2003-2005 (detector live-time of 507 days) and build on the one-year analysis [7] of the 2003 data. The Antarctic
Muon And Neutrino Detector Array (AMANDA) [8] is the first neutrino telescope constructed in transparent ice, deployed between 1500 m and 2000 m beneath the surface of the ice at the geographic South Pole in Antarctica. The AMANDA detector uses the Earth
to filter out muons generated in the atmosphere on
the Northern hemisphere and to search for point and
diffuse sources of neutrinos with upward going direction
at TeV to 100 TeV energies. However, at energies
above PeV the Earth is opaque to neutrinos, therefore
ν’s must be differentiated from the large background
(billions of events per year) of downward going cosmic
ray induced (bundles of) muons, which constitutes the
primary challenge of this analysis. AMANDA has been
taking data with the same detector configuration since
2000, and the data acquisition electronics was upgraded
in 2003 by recording full waveforms of the photo-
electron (p.e.) pulses from the photomultiplier tubes
(PMT) using Transient Waveform Recorders (TWR) [9].
The entire 2003 data set of the AMANDA TWR tech-
nology was processed, calibrated and analyzed to per-
form an atmospheric neutrino analysis and a search for
point sources in the Northern hemisphere sensitive at
TeV energies [10], [11], which demonstrated the basic
capabilities of the novel system to reproduce comparable
physics results of the standard system of the AMANDA
detector. After demonstrating the physics performance of
the TWR technology, the analysis is performed to search
for diffusely distributed neutrinos above PeV energies.
The full waveforms from the PMT’s provide far more
information on the light distribution from complex high
energy events. However, the new technology produced
∼ 85 TB in 3 years, more than an order of magnitude
increase w.r.t. the standard AMANDA system. To meet
the challenge of this large data volume and to simulate a comparable data volume, new analysis strategies were developed using high performance computing resources.
The resources required for this analysis exceeded 2M
CPU hours.
The UHE analysis is performed by using the infor-
mation of multiple p.e.’s from the PMT waveforms. The
initial level of the analysis is defined by eliminating
over 90% of the background by retaining only events with a large number of p.e. pulses recorded in the array.
[Fig. 1. Left panel: the neural network (NN2) distribution for data, background and signal simulation. Right panel: detector effective area for muon, electron and tau neutrinos as a function of neutrino energy.]
After this level the analysis is refined by developing
two independent neural networks. The first neural net-
work mostly incorporates variables from reconstructed
events, i.e. the reconstructed zenith angle of the events,
which can separate downward going muons from signal
mostly concentrated at the horizon. The second neural
network uses primarily time dependent variables, like
spread in leading edge of the arrival time and time-over-
threshold values of the p.e. pulses, which at the higher
level of the analysis better discern signal from high
energy bundles of atmospheric muons. Variables were
developed that exploit the full PMT waveforms, which
in turn strongly correlate with signal features and better separate signal from signal-like background events. Selection criteria
based on single variable discriminators, like the number
of photon-electrons were tested, but demonstrated not
to be efficient for retaining signal events. Therefore a
new set of variables were developed, which depend on
timing and energy of typical signal events [7]. The new
developed variables which use multiple photon-electrons
in the PMT waveforms are the fluctuation of the time-
over-threshold incorporated in the standard deviation of
the tot's (σ_tot), the mean of the leading-edge times of the photon-electron pulses (µ_le), and the fluctuation from the standard deviation of the leading-edge times (σ_le).
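For illustration, these per-event waveform observables can be computed from the pulse leading-edge times and time-over-threshold values roughly as below; the exact pulse selection and any per-channel treatment used in the analysis are not reproduced here.

import numpy as np

def waveform_variables(leading_edge_times, tot_values):
    """Per-event waveform observables used at the higher cut levels.

    leading_edge_times: leading-edge times (ns) of all recorded p.e. pulses.
    tot_values: corresponding time-over-threshold values (ns).
    Returns (sigma_tot, mu_le, sigma_le)."""
    le = np.asarray(leading_edge_times, dtype=float)
    tot = np.asarray(tot_values, dtype=float)
    sigma_tot = np.std(tot)   # fluctuation of the time-over-threshold
    mu_le = np.mean(le)       # mean leading-edge time
    sigma_le = np.std(le)     # spread of the leading-edge times
    return sigma_tot, mu_le, sigma_le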
Simulation of signal shows that distant UHE ν events
may not deposit much light in the detector, but the spread
in leading edge arrival time σ
le
and time-over-threshold
σ
tot
values is large compared to typical background
events. Background events tend to have large number of
muons with relatively small lateral dimensions, which
traverse through or close to the detector. Consequently,
the arrival time of background photons shows little
spread in time. On the other hand, signal events with
comparable values of number of photon-electrons do
not pass close to the detector. Therefore, these events
differ from background because the photons show large
variability in the arrival times. To further improve back-
ground rejection, neural networks were developed and
trained. The most powerful neural network included
variables that measure the mean spread in leading edge
times and fluctuations of time-over-threshold values. The
search is performed with a blind analysis, i.e. 20% of
the data sample is used to compare data with simulation
while cut optimization is based on simulation solely.
Once the analysis criteria are established, the cuts are
frozen and applied to the remaining 80% of the data.
Fig. 1 (left panel) shows the neural network NN2 before the final cut level for the combined data set, for the simulated atmospheric background and for the neutrino signal following an E^{-2} spectrum.
Background simulation was performed by generating
primary cosmic ray using the CORSIKA package [12],
propagating particles through the ice with the pro-
gram MMC [13], recording detector response using
the program AMASIM [14] with description of depth-
dependent properties of Antarctic ice [15], and including
proper treatment of waveform data. Similarly, neutrino
signal simulation was performed for all flavors using the
program ANIS [16]. Background simulation was biased
in energy and spectrum towards high energy events to
accommodate available computing resources. The final
cut on NN2 was determined by evaluating the model rejection factor (MRF), as described in [17], and computing the minimum of the ratio MRF = ⟨µ_90(n|b)⟩/n_sig. Here ⟨µ_90(n|b)⟩ is the average 90% C.L. upper limit, determined using the Feldman-Cousins method [18] and computed over the Poisson probabilities for the experiment repeated many times, and n_sig is the number of signal events for a given model. The minimum of the MRF determines the cut, which is set to NN2 > 0.85.
After the final cut one experimental event is observed
over a detector live-time of 507 days, consistent with
0.9 (−0.9, +1.3) events from background expectation.
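The cut placement can be sketched as a scan that minimises the MRF. The helper fc_average_upper_limit(b), standing for ⟨µ_90(n|b)⟩ averaged over Poisson outcomes for background expectation b, is an assumed interface (in practice a table built with the method of [18]).

import numpy as np

def model_rejection_factor(cut_values, n_sig_of_cut, n_bg_of_cut,
                           fc_average_upper_limit):
    """Scan a cut variable (here NN2) and return (best_cut, best_mrf).

    n_sig_of_cut(c): expected signal events surviving cut c.
    n_bg_of_cut(c): expected background events surviving cut c.
    fc_average_upper_limit(b): assumed helper returning <mu_90(n|b)>."""
    best_cut, best_mrf = None, np.inf
    for c in cut_values:
        n_sig = n_sig_of_cut(c)
        if n_sig <= 0:
            continue
        mrf = fc_average_upper_limit(n_bg_of_cut(c)) / n_sig  # MRF = <mu_90>/n_sig
        if mrf < best_mrf:
            best_cut, best_mrf = c, mrf
    return best_cut, best_mrf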
III. RESULTS
The search for a diffusely distributed flux of UHE
neutrinos shows no signal events, leading to a preliminary flux limit, summed over all flavors, of
E^{2}\,\Phi_{\nu} \leq 8.4 \times 10^{-8}\ \mathrm{GeV\, cm^{-2}\, s^{-1}\, sr^{-1}}    (1)
at 90% C.L. for the energy interval 10^{15.2} eV < E_ν < 10^{18.8} eV, defined by the 90% containment of the final neutrino energy distribution, which has a median energy of ⟨E_ν⟩ = 4 × 10^{16} eV.
[Fig. 2. Experimental limits of this analysis on a diffuse E^{-2} ν-flux for all flavors as a function of neutrino energy (thick solid line). Solid lines represent experimental limits from other experiments [23], [24], [25], [26], [27], [28], [29]. Dotted curves represent model predictions for a diffuse ν-flux; predictions have been adjusted for the all-flavor neutrino contribution where necessary.]
Fig. 1 (right panel) shows the detector effective area for all neutrino flavors for an E^{-2} spectrum as a function of neutrino energy; for muon neutrinos it reaches 100 m^2 at 100 PeV and rapidly increases with energy.
The limits are computed including the contribution of
systematic uncertainties by using the method described
in [19]. Tab. I summarizes the different sources of
systematic uncertainties that impact the background and signal simulation in this analysis; the individual contributions are described below.
TABLE I
Summary of systematic uncertainties estimated from different sources impacting background and signal simulation.

Source                          | bg    | signal (E^{-2})
CR comp. and inter. models      | ±80%  | -
detector sensitivity            | ±15%  | ±15%
year-to-year detector variation | ±14%  | ±14%
tot-factor for d > 200 m        | -     | +10%
ice properties                  | ±20%  | ±20%
charm BG                        | negl. | -
tot-corr. and N2-laser cal.     | ±100% | ±10%
neutrino cross section [20]     | -     | ±4%
LPM effect                      | -     | −3%
total (added in quadrature)     | ±131% | ±32%
TABLE II
Summary of model predictions tested by this analysis. Models with MRF < 1 are excluded at 90% C.L., while models with MRF > 1 are consistent with these results.

Model              | ν_all | MRF  | Reference
AGN RL A-jet       | 1.10  | 3.05 | Mannheim 95 PG [1]
AGN RL B-jet       | 17.8  | 0.19 | Mannheim 95 RL [1]
AGN-jet            | 14.6  | 0.23 | MPR 00 [2]
AGN-core           | 3.12  | 1.07 | Stecker 05 [3]
Waxman-Bahcall     | 4.04  | 0.83 | WB 99 [4]
GZK mono-energetic | 5.50  | 0.61 | KKSS 02 [21]
GZK index α=2      | 4.68  | 0.72 | KKSS 02 [21]
GZK full evol.     | 0.28  | 12.0 | ESS 01 [22]
Differences in the simulated cosmic-ray composition, obtained by generating proton and iron CR primaries, and in the interaction models, obtained by using two different hadronic models (QGSJET and SIBYLL), were used to estimate the variations in the background event rate; uncertainties in the detector sensitivity, which mostly depend on the absolute sensitivity of the PMTs, were also included; variations were estimated from the differences in detector response observed for the three years studied in the analysis; variations in the spread of the time-over-threshold for distances d > 200 m were evaluated for their impact on the signal efficiency; studying the ice properties with two different models gave a maximum variation of 16% in the signal sensitivity; the impact of the ice properties was studied further by varying the photon propagation length in ice for distances characteristic of high-energy signal and by folding this variation into the detector sensitivity; and the background from charm production was estimated to be negligible in this analysis.
[Fig. 3. Point flux limits as a function of declination sin(δ) for the Southern Sky, averaged over azimuth (solid line). Also included are limits from other experiments [30], [31] and for the Northern Sky [32].]
Variations in the N2-laser calibration of the time-over-threshold spread were estimated for the background event rate and the signal efficiency; uncertainties in the neutrino cross section [20] for the energies relevant to this analysis were incorporated, and the impact of the LPM effect for signal above 10^8 GeV was also included. The estimated systematic uncertainties have been added in quadrature and incorporated into the final results of the analysis.
The diffuse limit has been used to test a number of model predictions different from the E^{-2} spectrum. Model predictions with a ratio ⟨µ_90(n|b)⟩/n_sig < 1 are excluded by this analysis. The models tested and the corresponding MRF are summarized in Tab. II. A class of AGN predictions based on jet-model scenarios, such as [1] (RL B) and [2], has been excluded, while prediction [1] (RL A) is not, and the AGN prediction based on a core-model scenario [3] is almost excluded. From the class of models excluded by this analysis we can conclude that jet models normalized to the diffuse x-ray or GeV/TeV emission from individual sources are generally disfavored. These limits are consistent with, and below, the maximum upper bound on the neutrino flux predicted by [4], [6], and also below the maximum neutrino flux due to a possible extra-galactic component of low-energy protons of 10^{17} eV [5]. These results are also consistent with, and below, the bounds on neutrino fluxes presented by [2], computed by assuming sources optically thin (thick) to pion photo-production processes. Models of the GZK neutrino spectrum were also tested: predictions [21] are excluded, while prediction [22] is still compatible with these results. The limits from this work on an E^{-2} neutrino flux as a function of energy are shown in Fig. 2 (thick solid line). Model predictions are represented by dotted curves, and solid lines show limits presented by other experiments [23], [24], [25], [26], [27], [28], [29].
At UHE energies this analysis is also sensitive to point sources of neutrinos in the Southern Sky. Simulation shows that muons are reconstructed with an angular resolution of ∼7° over the entire Southern hemisphere. Except for a small band near the horizon, signal originating from the Southern Sky will be reconstructed in the Southern Sky. The sensitivity depends only on the zenith angle and is roughly independent of azimuth, with the maximum sensitivity at the horizon [7]. Since no excess of events was observed, a flux limit as a function of declination is derived (Fig. 3) and fitted with a function of δ as E^2 dN/dE_ν(sin δ) ≤ [1.3 × e^{−(2 sin δ)}] × 10^{-7} GeV cm^{-2} s^{-1}, with 10% accuracy, valid for −0.98 < sin(δ) < 0 and for 10^{15.2} eV < E_ν < 10^{18.8} eV. These point flux limits are valid for energies above a PeV and are compatible with results from other experiments, which cover lower energy intervals between 10 GeV - 100 TeV [30] and between 10 GeV - 100 GeV [31].
To summarize, we have presented in this paper the
most stringent limits to date for neutrino energies above
1 PeV. These experimental limits begin to restrict the
largest possible fluxes of the WB upper bound [4], [5],
[6].
IV. ACKNOWLEDGMENT
The author acknowledges support from U.S. National
Science Foundation-Physics Division, and the NSF-
supported TeraGrid system at the San Diego Supercom-
puter Center (SDSC).
REFERENCES
[1] K. Mannheim, Astropart. Phys. 3, 295 (1995).
[2] K. Mannheim et al., Phys. Rev. D 63, 023003 (2001).
[3] F. Stecker, Phys. Rev. D 72, 107301 (2005).
[4] E. Waxman and J. Bahcall, Phys. Rev. D 59, 023002 (1999).
[5] J. Bahcall and E. Waxman, Phys. Rev. D 64, 023002 (2002).
[6] E. Waxman, Phil.Trans.Roy.Soc.Lond. A 365, 1323 (2007).
[7] A. Silvestri, Ph.D. Thesis, University of California, Irvine (2008).
[8] E. Andres et al., Nature 410, 441 (2001).
[9] W. Wagner et al., Proc. 28th ICRC, 2, 1365 (2003).
[10] A. Silvestri et al., Proc. 29th ICRC, 5, 431 (2005).
[11] A. Silvestri et al., Mod. Phys. Lett. A 22, 1769 (2007).
[12] D. Heck, DESY-PROC-1999-01, 227 (1999).
[13] D. Chirkin and W. Rhode, hep-ph/0407075 (2004).
[14] S. Hundertmark,Proc.1st νTelescopes Workshop, Zeuthen (1998).
[15] M. Ackermann et al., J. Geophys. Res. 111, D13203 (2006)
[16] M. Kowalski and A. Gazizov, Comput. Phys. Comm. 172, 203
(2005).
[17] G. Hill and K. Rawlins, Astropart. Phys. 19, 393 (2003).
[18] G. J. Feldman and R. D. Cousins, Phys. Rev. D 57, 3873 (1998).
[19] G. Hill, Phys. Rev. D 67, 118101 (2003).
[20] A. Cooper-Sarkar and S. Sarkar, JHEP 75, 801 (2008).
[21] O. E. Kalashev et al., Phys. Rev. D 66, 063004 (2002).
[22] R. Engel et al., Phys. Rev. D 64, 093010 (2001).
[23] I. Kravchenko et al., Phys. Rev. D 73, 082002 (2006).
[24] A. Achterberg et al., Phys. Rev. D 76, 042008 (2007).
[25] K. Martens et al., astro-ph/0707.4417, (2007).
[26] M. Ackermann et al., Astrophys. J. 675, 1014 (2008).
[27] P. Gorham et al., astro-ph/0812.2715, (2008).
[28] J. Abraham et al., astro-ph/0903.3385, (2009).
[29] K. Mase, A. Ishihara and S. Yoshida et al., Proc. 31th ICRC,
these Proceedings (2009).
[30] M. Ambrosio et al., Astrophys. J. 546, 1038 (2001).
[31] S. Desai et al., Astropart. Phys. 29, 42 (2008).
[32] R. Abbasi et al., Phys. Rev. D 79, 062001 (2009).
PROCEEDINGS OF THE 31st ICRC, ŁÓDŹ 2009
Selection of High Energy Tau Neutrinos in IceCube
Seon-Hee Seo
∗
and P. A. Toale
†
for the IceCube Collaboration
‡
∗
Oskar Klein Centre and Dept. of Physics, Stockholm University, SE-10691 Stockholm, Sweden
†
Dept. of Physics, Pennsylvania State University, University Park, PA 16802, USA
‡
See the special section of these proceedings
Abstract. Astrophysical neutrino sources are ex-
pected to produce electron and muon flavor neutrinos
via charged pion decay. Over cosmological distances,
standard neutrino oscillations will change the flavor
content to include equal fluxes of all three flavors. Tau
neutrinos with energies above a few PeV will produce
characteristic signatures known as double-bangs and
lollipops. In contrast to searches for cosmological
electron and muon neutrinos, which must contend
with backgrounds from atmospheric neutrinos, tau
neutrinos are expected to be background-free. Thus
far no searches for tau neutrino events with these
characteristic signatures have been performed be-
cause their detection requires a kilometer-scale de-
tector. In this talk, we will present current results
from several methods for searching for high energy
tau neutrinos in IceCube.
Keywords: Tau neutrinos, Double-bangs, IceCube
I. INTRODUCTION
One of the main research topics in neutrino tele-
scopes such as ANTARES and IceCube is to search for
neutrinos of astrophysical origin. Astrophysical particle
accelerators like AGNs and GRBs may produce high
energy neutrinos [1], [2]. As daughters of charged pion
decay, the emerging neutrinos are expected to have the
flavor flux ratio of 1:2:0 (ν_e : ν_µ : ν_τ). Due to neutrino
oscillations, this neutrino flux is expected to be observed
in the flavor ratio of approximately 1:1:1 on Earth. There
are also models which predict different ratios of neutrino
flux observed on the Earth but they all lead to non-zero
flux of ν
τ
[3], [4].
Here we discuss aspects of a search for high energy
(greater than a few PeV) ν
τ
’s with the IceCube 22-
string array (“IC-22”). High energy ν
τ
’s can leave very
distinctive signatures in the IceCube detector owing to
the very short life time and numerous decay channels of
tau leptons. We denote these signatures “lollipops,” “in-
verted lollipops” and “double-bangs” [5], [6]. Although
high energy ν
τ
’s can traverse the Earth through the
“regeneration” process [7], they typically emerge with
energies too low to create any of the signatures under
study here. The low energy (below PeV) ν
τ
’s can be
detected in 4π in IceCube but are seen as ”cascade-like”
events, which is described elsewhere.
[Fig. 1. A simulated double-bang event produced by a primary neutrino of 47 PeV entering the IC-22 detector at a 35° zenith angle. The two showers are separated by a 332 m long tau track. The colors (online version only) represent the relative hit times, with red for the earliest hits, blue for the latest, and intermediate times following the colors of the rainbow.]
The lollipop and inverted-lollipop topologies are characterized by having either the production or the decay vertex of the tau lepton well outside the detector fiducial vol-
ume, respectively. In these topologies we expect to see
a track due to the tau lepton and a single shower at the
contained vertex. The double-bang topology as shown
in Fig. 1 is characterized by having both production
and decay vertices contained within the detector fiducial
volume, and the tau track long enough to clearly separate
the two showers from one another.
These astrophysical high-energy ν_τ events suffer much less contamination from the atmospheric background of cosmic-ray interactions than ν_µ and ν_e [8]. This is because the conventional atmospheric ν_τ flux is nearly zero, and the prompt ν_τ flux produced from charmed-meson decay in the atmosphere is also expected to be very small [9], [10].
II. SIGNATURE BASED SEARCH METHOD
In IC-22, the search for high-energy ν_τ does not incorporate full event reconstruction [11], but instead relies on a simpler approach that exploits the unique signatures of these events.
[Fig. 2. Charge (number of photo-electrons) per DOM as a function of light arrival time (ns) for a simulated inverted-lollipop event produced by a primary neutrino of 64 PeV. The initial peak corresponds to the shower from the ν_τ charged-current interaction, followed by a tau track; a sliding time window is indicated.]
An example of a simple criterion is
given in Fig. 2 which shows the distribution of detected
charge (proportional to the amount of Cherenkov light)
per digital optical module (DOM) as a function of the
time at which the light arrived, for a simulated inverted
lollipop event.
As shown in the figure, the inverted-lollipop event produces a unique topology, consisting of a shower followed by a track inside the detector, in contrast to typical muons, which produce simple track-like signatures with smooth light deposition along their track length. A
set of simple variables based on the differences in the
topologies can select (inverted-)lollipop and double-bang
events while removing track-like muon backgrounds.
One of the variables invented for this purpose is the maximum “current ratio,” I_R,max. This variable is defined as the ratio of two currents, I_R, themselves defined as the amount of charge per unit time, inside and outside a sliding time window, as shown in Fig. 2. As the sliding time window passes through the event's time-ordered hits, I_R is calculated, and its maximum value I_R,max is used as a cut variable. For signal events, I_R,max is expected to be greater than 1, while for simple track-like muon backgrounds it is expected to be closer to 1.
However, energetic muons can leave a big shower from bremsstrahlung during their passage through the fiducial volume, so such events could survive the I_R,max cut due to the similarity of the event topology. To remove these energetic muon events, another variable called the “local current,” I_L, is used. I_L is defined as the current calculated in three equally spaced time regions of an event's time-ordered hits. Of the first and last thirds, we choose the one with the larger I_L as the selection criterion. We intentionally ignore the middle third to help reject energetic muons that have an accompanying bremsstrahlung somewhere in the middle of their track length. This variable showed good discrimination power between signal and background events.
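A sketch of the two discriminating variables just described, computed from an event's time-ordered hits. The window length, the discrete window positions and the handling of short events are illustrative choices, not the analysis values.

import numpy as np

def max_current_ratio(times, charges, window_ns=1000.0):
    """I_R,max: maximum, over window positions, of the ratio of the
    charge per unit time inside a sliding window to that outside it."""
    t = np.asarray(times, dtype=float)
    q = np.asarray(charges, dtype=float)
    order = np.argsort(t)
    t, q = t[order], q[order]
    duration = t[-1] - t[0]
    best = 0.0
    for start in t:  # slide the window across the time-ordered hits
        inside = (t >= start) & (t < start + window_ns)
        i_in = q[inside].sum() / window_ns
        i_out = q[~inside].sum() / max(duration - window_ns, 1.0)
        if i_out > 0:
            best = max(best, i_in / i_out)
    return best

def local_current(times, charges):
    """I_L: the larger of the two currents (charge per unit time) in the
    first and last thirds of the event; the middle third is ignored to
    avoid bright bremsstrahlung mid-track."""
    t = np.asarray(times, dtype=float)
    q = np.asarray(charges, dtype=float)
    t0, t1 = t.min(), t.max()
    third = max((t1 - t0) / 3.0, 1.0)   # ns; guard against degenerate events
    first = q[t < t0 + third].sum() / third
    last = q[t >= t1 - third].sum() / third
    return max(first, last)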
III. CUTS AND EFFICIENCIES
So far the cuts have been developed in six distinct
levels after application of a trigger and online filter.
The trigger, denoted “SMT8,” applies a simple majority
condition of 8 hits within 5 µs to the data as it is
acquired. The online filter is a logical OR of IceCube’s
cascade and Extremely High Energy (EHE) filters. The
cascade filter is designed to select events which satisfy a minimal condition for “cascade-like” events [12]. For the EHE filter, a minimum of 80 hits is required.
The level 0 and 1 (L0, L1) cuts are designed to
remove track-like muon backgrounds. The L2 cuts are
designed mainly to remove energetic muons accompa-
nying bremsstrahlung in the middle of their passage.
The L3 cuts are designed to remove downwards-going
events, and the L4 cuts are designed to select events
that look more “cascade-like” than “track-like” using
different variables from those used in L0 and L1. The
L5 cuts are designed to remove events which are not
sufficiently contained inside the detector. Fig. 3 shows
the relative efficiencies at each cut level for signal and
background events.
[Fig. 3. High-energy ν_τ selection cut efficiencies w.r.t. SMT8 for lollipop, double-bang, astrophysical ν_τ, ν_µ and ν_e, and atmospheric background events for IC-22. The numbers from 0 to 7 on the x-axis represent SMT8, the online filter, and the L0-L5 cuts, respectively.]
As shown in Fig. 3, lollipop and double-bang events keep the highest efficiencies because they are specially selected from all generated ν_τ's such that they are well contained within the IC-22 detector (“golden events”). The next-highest efficiency group is astrophysical neutrinos of all flavors. Note that the astrophysical ν_τ's form an unbiased sample including all generated ν_τ events, unlike the “golden events”. Atmospheric neutrino backgrounds, prompt and conventional, come next after the astrophysical neutrinos. Atmospheric muon backgrounds, single and coincident, show the lowest efficiencies, although they run out of statistics from the L3 cut onward, which will need improved simulation statistics in the near future.
The cuts developed so far segregate astrophysical signal well from atmospheric background events. Within the astrophysical neutrinos, however, the cuts are almost equally sensitive to all flavors, so we lose discrimination power for the specific flavor under study, ν_τ. This is because the cuts are still quite general. To better distinguish astrophysical ν_τ from astrophysical ν_µ and ν_e, which can more easily mimic lollipop than double-bang signatures, the future direction of this analysis is to focus exclusively on the double-bang topology.
IV. DOUBLE-BANG SEARCH
Fig. 4 shows the charge-per-DOM distribution as a function of DOM hit time for a simulated double-bang event produced by a primary neutrino of 50 PeV. As shown in the figure, the double-bang event has two showers separated by a 403 m long track.
[Fig. 4. Charge (number of photo-electrons) per DOM as a function of DOM hit time (ns) for a simulated double-bang event. The first peak corresponds to the shower from the ν_τ CC interaction and the second peak to the tau decay. Only hits arriving within 900 ns of residual time were used (residual time is the difference between the expected and actual photon arrival times).]
Using the local current variable described above and requiring a large I_L in both the first and last parts of the event, double-bang events can be selected.
However, very energetic muons which produce two
bremsstrahlungs in sequence could survive this cut.
Further cuts are still being developed and evaluated.
V. CONCLUSION
Nature produces high-energy neutrinos, and they can be observed in all flavors. We aim in particular to detect high-energy ν_τ's, which can leave unique signatures inside the IceCube detector. So far our approach is rather simple, but we will continue to investigate the IceCube potential, especially for double-bang type events.
REFERENCES
[1] F. Halzen and D. Hooper, Rept. Prog. Phys. 65, 1025 (2002).
[2] A. Neronov and M. Ribordy, arXiv:0905.0509 (2009).
[3] S. Pakvasa, Nucl. Phys. B 137, 295 (2004).
[4] D. Meloni and T. Ohlsson, Phys. Rev. D 75, 125017 (2007).
[5] J.G. Learned and S. Pakvasa, Astropart. Phys. 3, 267 (1995).
[6] D. F. Cowen, J. Phys. 60, 227 (2007).
[7] E. Bugaev et al., Astropart. Phys. 21, 491 (2004).
[8] T. DeYoung, S. Razzaque and D. F. Cowen, J. Phys. 60, 231 (2007).
[9] L. Pasquali and M. H. Reno, Phys. Rev. D 59, 093003 (1999).
[10] R. Enberg et al., Phys. Rev. D 78, 043005 (2008).
[11] M. Ribordy, Nucl. Instrum. Meth. A 574, 137 (2006).
[12] J. Kiryluk, “First search for extraterrestrial neutrino-induced cascades with IceCube,” in these proceedings.
PROCEEDINGS OF THE 31st ICRC, ŁÓDŹ 2009
Search for quantum gravity with IceCube and high energy atmospheric neutrinos

Warren Huelsnitz∗ and John Kelley† for the IceCube Collaboration‡
∗Department of Physics, University of Maryland, College Park, MD 20742, USA
†Department of Physics, University of Wisconsin, Madison, WI 53706, USA
‡See the special section of these proceedings
Abstract. We present the expected sensitivity of
an analysis that will use data from the IceCube
Neutrino Observatory to search for distortions in
the energy or directional dependence of atmospheric
neutrinos. Deviations in the energy and zenith angle
distributions of atmospheric neutrinos due to Lorentz
invariance violation or quantum decoherence could
be a signature of quantum gravity in the neutrino
sector. Additionally, a periodic variation as a func-
tion of right ascension is a possible consequence
of a Lorentz-violating preferred frame. We use a
likelihood method to constrain deviations in the
energy and zenith angle distributions and a discrete
Fourier transform method to constrain a directional
asymmetry in right ascension. In the absence of new
physics, the likelihood method can also constrain
conventional and prompt atmospheric neutrino flux
models. Results from a similar analysis using data
from the AMANDA-II detector are also discussed.
Keywords: quantum gravity, Lorentz violation, atmospheric neutrinos
I. INTRODUCTION
Physicists have so far been unable to reconcile quan-
tum field theory and general relativity into a coherent
theory of quantum gravity (QG). Numerous approaches
are in development, and common to many is the possibil-
ity that Lorentz invariance is violated at extremely small
distance scales (high energy scales), due to a discrete
structure of spacetime or an invariant minimum length
scale. Interactions with a spacetime foam, or virtual
black holes, may also induce quantum decoherence in
which pure quantum states evolve into mixed states [1].
Neutrinos, lacking any gauge interactions other than
weak, and having extremely high Lorentz factors, are
sensitive probes of these effects. Violation of Lorentz
invariance (VLI) can induce a number of flavor-changing
signatures in neutrinos, including oscillations with
unique energy dependencies or directional asymmetries
due to a Lorentz-violating preferred frame. Quantum
decoherence (QD) can also result in flavor-changing
effects that depend upon the neutrino energy.
Atmospheric neutrinos are produced in the decay
chains of particles resulting from the interaction of cos-
mic rays with the earth’s atmosphere [2,3]. The IceCube
neutrino telescope [4], currently under construction in
the glacial ice at the South Pole, detects the Cherenkov
radiation emitted by charged particles produced by neu-
trino interactions in the ice or rock. IceCube has already
collected a large sample of atmospheric muon neutrinos
in the energy range of 100 GeV to a few tens of TeV,
and can search for deficits caused by possible QG effects
such as VLI or QD.
We will first review the phenomenological models of
QG to be tested. Then, we will discuss event selection
and the observables used for the analysis, followed
by a discussion of the likelihood and discrete Fourier
transform (DFT) methods we use. Finally, results from
the AMANDA-II detector, and expected sensitivity of
the 40-string configuration of IceCube will be discussed.
II. PHENOMENOLOGY
A. Violation of Lorentz Invariance
For VLI models, we consider the case of a flavor-
dependent dispersion relation, or, equivalently, flavor-
dependent limiting velocities that differ from the speed
of light [5, 6]. Further, we make the simplifying as-
sumption of a two neutrino model in which the new
eigenstates are characterized by a mixing angle ξ and a phase η. This leads to a muon neutrino survival probability of the form:

P_{\nu_\mu \to \nu_\mu} = 1 - \sin^2 2\Theta\,\sin^2\!\left(\frac{\Delta m^2 L}{4E}\,\Re\right).

E is the neutrino energy and L the propagation distance for the atmospheric neutrino, which is a function of zenith angle. Θ is the effective mixing angle, given by

\sin^2 2\Theta = \left(\sin^2 2\theta + R^2 \sin^2 2\xi + 2R\,\sin 2\theta\,\sin 2\xi\,\cos\eta\right)/\Re^2.

The effective oscillation wavelength is

\Re = \left[1 + R^2 + 2R\left(\cos 2\theta\,\cos 2\xi + \sin 2\theta\,\sin 2\xi\,\cos\eta\right)\right]^{1/2}.

R is the ratio between the VLI oscillation wavelength and the mass-induced oscillation wavelength:

R = \frac{\Delta c}{c}\,\frac{E}{2}\cdot\frac{4E}{\Delta m^2}.

∆c/c is the velocity splitting between the eigenstates. The VLI oscillation length can be generalized to integral powers of the neutrino energy:

\frac{\Delta c}{c}\,\frac{LE}{2} \;\rightarrow\; \Delta\delta\,\frac{LE^n}{2}.

Since mass-induced oscillations are suppressed in the energy range for this analysis, we can make a simplifying assumption and set η = π/2 so that cos η = 0. We then have two physics parameters: ∆c/c and sin² 2ξ.
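To make these ingredients concrete, the short Python sketch below evaluates the n = 1 VLI survival probability for a given energy and baseline. It is an illustration only, not IceCube analysis code: the unit conventions (E in GeV, L in km, ∆m² in eV²), the default atmospheric oscillation parameters, and the conversion constant are assumptions made for the example.

```python
import numpy as np

GEV_KM = 5.07e18  # 1 GeV * 1 km in natural units (assumed conversion, 1/(hbar*c))

def vli_survival(E_gev, L_km, dc_over_c, xi, theta=np.pi / 4, dm2_ev2=2.4e-3):
    """P(nu_mu -> nu_mu) for n = 1 VLI with eta = pi/2 (cos eta = 0), as above."""
    phi_mass = 1.267 * dm2_ev2 * L_km / E_gev          # dm^2 L / 4E
    phi_vli = 0.5 * dc_over_c * E_gev * L_km * GEV_KM  # (dc/c) L E / 2
    R = phi_vli / phi_mass                             # ratio of the two wavelengths
    # cos(eta) = 0, so the cross terms containing cos(eta) drop out
    Rcal = np.sqrt(1.0 + R**2 + 2.0 * R * np.cos(2 * theta) * np.cos(2 * xi))
    sin2_2Theta = (np.sin(2 * theta)**2 + R**2 * np.sin(2 * xi)**2) / Rcal**2
    return 1.0 - sin2_2Theta * np.sin(phi_mass * Rcal)**2

# example: a vertically upgoing 1 TeV neutrino crossing the Earth's diameter
print(vli_survival(E_gev=1e3, L_km=1.27e4, dc_over_c=1e-27, xi=np.pi / 4))
```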
B. Decoherence
For quantum decoherence, we use a full three-neutrino
model, in which the muon neutrino survival probability
can be written [7, 8]:
P_{\nu_\mu \to \nu_\mu} = \frac{1}{3} + \frac{1}{2}\Bigl[ e^{-\gamma_3 L}\cos^4\theta_{23} + \tfrac{1}{12}\, e^{-\gamma_8 L}\left(1 - 3\cos 2\theta_{23}\right)^2 + 4\, e^{-(\gamma_6+\gamma_7)L/2}\cos^2\theta_{23}\,\sin^2\theta_{23} \Bigl(\cos\tfrac{L\sqrt{m}}{2} + \tfrac{\gamma_6-\gamma_7}{\sqrt{m}}\,\sin\tfrac{L\sqrt{m}}{2}\Bigr)\Bigr],   (1)

with m \equiv \left|\,(\gamma_6-\gamma_7)^2 - \left(\Delta m^2_{23}/E\right)^2\right|.
To limit the number of physics parameters, we assume that γ₃ = γ₈ and γ₆ = γ₇. The γᵢ can be generalized to integral powers of the neutrino energy:

\gamma_i \;\rightarrow\; \gamma_i^{*}\left(\frac{E}{\mathrm{GeV}}\right)^{n} \mathrm{GeV}.

The units of the γᵢ* are then GeV^{n+1}.
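A minimal numerical sketch of Eq. (1), under the simplifications just described, may help follow its structure. The function name, unit conventions (E and the γᵢ in GeV, L in GeV⁻¹) and parameter values are assumptions made for illustration and do not come from the analysis.

```python
import numpy as np

KM_TO_INV_GEV = 5.07e18  # assumed conversion: 1 km in natural units

def qd_survival(L, E, g3, g8, g6, g7, dm2_23=2.4e-21, theta23=np.pi / 4):
    """P(nu_mu -> nu_mu) with quantum-decoherence parameters gamma_i (Eq. 1)."""
    m_bar = abs((g6 - g7) ** 2 - (dm2_23 / E) ** 2)
    root = np.sqrt(m_bar)
    osc = 1.0 if root == 0.0 else (np.cos(L * root / 2.0)
                                   + (g6 - g7) / root * np.sin(L * root / 2.0))
    term = (np.exp(-g3 * L) * np.cos(theta23) ** 4
            + np.exp(-g8 * L) * (1.0 - 3.0 * np.cos(2 * theta23)) ** 2 / 12.0
            + 4.0 * np.exp(-(g6 + g7) * L / 2.0)
            * np.cos(theta23) ** 2 * np.sin(theta23) ** 2 * osc)
    return 1.0 / 3.0 + 0.5 * term

# energy-dependent generalization as in the text, here with n = 2 (values are placeholders)
gamma_star, n = 1e-32, 2
E, L = 1.0e3, 1.27e4 * KM_TO_INV_GEV   # 1 TeV neutrino crossing the Earth's diameter
g = gamma_star * (E / 1.0) ** n
print(qd_survival(L, E, g3=g, g8=g, g6=g, g7=g))
```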
C. Directional Asymmetry
The location of IceCube at the South Pole is ideally
suited to search for a sidereal variation in the flux of
atmospheric neutrinos. Right ascension (RA) is synony-
mous with sidereal phase, and azimuthal asymmetries
in the detector average out over a year. We use a
two-neutrino model derived from the Standard Model
Extension (SME), known as the vector model [9]. This
model predicts a survival probability that depends on the
direction of neutrino propagation:
P_{\nu_\mu \to \nu_\mu} = 1 - \sin^2\!\Bigl[\, L\left( (A_s)_{\mu\tau}\,\sin(\alpha + \varphi_0) + (A_c)_{\mu\tau}\,\cos(\alpha + \varphi_0) \right) \Bigr].
α is the RA of the neutrino and φ₀ is the offset between the origin of our coordinate system and a 'preferred' direction. A_s and A_c are functions of the neutrino energy E, the neutrino direction unit vector N̂, and four coefficients from the SME, the (a_L)^µ and (c_L)^µν:
(A_s)_{\mu\tau} = \hat{N}^Y\left( a_L^X - 2E\,c_L^{TX} \right) - \hat{N}^X\left( a_L^Y - 2E\,c_L^{TY} \right),

(A_c)_{\mu\tau} = -\hat{N}^X\left( a_L^X - 2E\,c_L^{TX} \right) - \hat{N}^Y\left( a_L^Y - 2E\,c_L^{TY} \right).
Typically, we assume a_L^X = a_L^Y and c_L^TX = c_L^TY in the analysis. Additionally, while constraining the a_L coefficients, the c_L are set to 0, and when constraining the c_L coefficients, the a_L are set to 0.
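The following sketch shows how the vector-model survival probability could be evaluated as a function of right ascension using the expressions above, with a_L^X = a_L^Y and c_L^TX = c_L^TY as assumed in the text. The direction components, baseline and coefficient values are placeholders, not results from the analysis.

```python
import numpy as np

def vector_model_survival(alpha, L, E, N_x, N_y, a_L, c_L, phi0=0.0):
    """P(nu_mu -> nu_mu) versus right ascension alpha (radians); natural units."""
    ax = ay = a_L                     # a_L^X = a_L^Y
    cx = cy = c_L                     # c_L^TX = c_L^TY
    A_s = N_y * (ax - 2.0 * E * cx) - N_x * (ay - 2.0 * E * cy)
    A_c = -N_x * (ax - 2.0 * E * cx) - N_y * (ay - 2.0 * E * cy)
    phase = L * (A_s * np.sin(alpha + phi0) + A_c * np.cos(alpha + phi0))
    return 1.0 - np.sin(phase) ** 2

ra = np.linspace(0.0, 2 * np.pi, 32)   # the 32 RA bins used in the analysis
P = vector_model_survival(ra, L=6.4e22, E=1e3, N_x=0.7, N_y=0.7,
                          a_L=1e-23, c_L=0.0)
print(P.min(), P.max())
```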
III. EVENT SELECTION
We are interested in upgoing atmospheric ν_µ events,
and the main background is cosmic-ray muons. Even
after an initial event selection based on zenith angle,
the event sample is dominated, by several orders of
magnitude, by misreconstructed cosmic-ray muons. This
background is further reduced by event selection cuts
that are based on track quality parameters and on fits
to alternative track hypotheses. Alternative track hy-
potheses include downgoing versus upgoing tracks, and
coincident muon events.
Since the remaining background contamination is dif-
ficult to model with simulation, we require an essentially
pure neutrino sample. Final event selection to achieve
this level of purity is done using a Boosted Decision
Tree (BDT) [10]. The event sample for one year of data
from 40-string IceCube is expected to be about 20,000
upgoing neutrinos, with zenith angles between 90 and
180 degrees, and neutrino energies from 100 GeV to
about 30 TeV.
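For illustration, the kind of BDT-based separation described above can be sketched as follows. The paper uses the TMVA package [10]; the snippet below substitutes scikit-learn and entirely synthetic "track quality" variables, so neither the features nor the settings correspond to the actual event selection.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(1)
n = 5000
# toy features for neutrino-like (1) and misreconstructed cosmic-ray-muon-like (0) events
X_sig = rng.normal(loc=1.0, scale=1.0, size=(n, 4))
X_bkg = rng.normal(loc=-1.0, scale=1.0, size=(n, 4))
X = np.vstack([X_sig, X_bkg])
y = np.concatenate([np.ones(n), np.zeros(n)])

bdt = GradientBoostingClassifier(n_estimators=200, max_depth=3)
bdt.fit(X, y)

# a cut on the BDT score would then define the final, essentially pure sample
scores = bdt.predict_proba(X)[:, 1]
print("signal efficiency at score > 0.9:", (scores[y == 1] > 0.9).mean())
```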
IV. FLUX MODELING AND BINNING
Simulated events are weighted by their contribution
to conventional [2, 3] and prompt [11–13] atmospheric
neutrino flux models. These weights are then multiplied
by the applicable oscillation or decoherence survival
probability. Nuisance parameters are used in the likeli-
hood analysis to account for the more significant theoret-
ical and experimental uncertainties in flux normalization,
spectral index, and zenith angle distribution. Individual
events are thus weighted as follows:
w = A\left\{ B\,w_{\mathrm{conv}} + C\,w_{\mathrm{prompt}} \right\} P_{\nu_\mu \to \nu_\mu},

where

A = \varepsilon\left(\frac{E}{1\,\mathrm{TeV}}\right)^{\Delta\gamma}\left[1 + 2\alpha\left(\cos\theta_Z + \tfrac{1}{2}\right)\right], \qquad
B = 1 + 2\alpha_c\left(\cos\theta_Z + \tfrac{1}{2}\right), \qquad
C = A_p\left(\frac{E}{5\,\mathrm{TeV}}\right)^{\Delta\gamma_p}.
ε accounts for theoretical and experimental uncertainties in the overall flux normalization, such as ice model uncertainties, optical module (OM) sensitivity uncertainty, interaction rate uncertainties, reconstruction errors, etc. ∆γ accounts for the uncertainty in the primary cosmic ray slope as well as the impact of OM and ice model uncertainties on the observed spectral index. α accounts for the impact of OM and ice model uncertainties on the zenith angle tilt of the observed flux. α_c accounts for theoretical uncertainty in the zenith-angle tilt of the conventional atmospheric neutrino flux, primarily due to uncertainty in the pion to kaon ratio. A_p and ∆γ_p account for theoretical uncertainty in the magnitude and spectral index of the prompt atmospheric neutrino flux, primarily due to uncertainties in charm production cross sections and fragmentation functions. θ_Z is the zenith angle of the neutrino, and P_{ν_µ→ν_µ} is the oscillation or decoherence survival probability. The particular forms of the spectral tilt and zenith angle tilt equations were chosen to minimize the impact on the overall normalization as the correction factors are varied.
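As an illustration of the weighting scheme above, a direct transcription of the weight formula into Python is sketched below. The argument names mirror the nuisance parameters in the text; the function itself and the example values are illustrative, not the analysis implementation.

```python
def event_weight(E, cos_zenith, w_conv, w_prompt, P_surv,
                 eps=1.0, dgamma=0.0, alpha=0.0, alpha_c=0.0,
                 A_p=1.0, dgamma_p=0.0):
    """w = A * {B*w_conv + C*w_prompt} * P(nu_mu -> nu_mu); E in GeV."""
    A = eps * (E / 1e3) ** dgamma * (1.0 + 2.0 * alpha * (cos_zenith + 0.5))
    B = 1.0 + 2.0 * alpha_c * (cos_zenith + 0.5)
    C = A_p * (E / 5e3) ** dgamma_p
    return A * (B * w_conv + C * w_prompt) * P_surv

# example: a 1 TeV upgoing event near the horizon with survival probability 0.9
print(event_weight(E=1e3, cos_zenith=-0.1, w_conv=1e-4, w_prompt=1e-6, P_surv=0.9))
```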
Events are binned in log₁₀(dE/dX) and cos(θ_Z) for tests of VLI, decoherence, and atmospheric flux models, and in RA for the vector model. dE/dX, with units of GeV m⁻¹, is the average energy loss per unit propagation length of a muon that would produce the detected amount of light, and serves as an estimator for the original neutrino energy. The energy resolution is about 0.3 on a log scale, reducing sensitivity to VLI effects by a factor of two as compared to perfect energy resolution. Histograms for log₁₀(dE/dX) and cos(θ_Z) are 10 × 10, and range from −1.9 to 1.1 for log₁₀(dE/dX) and −1 to 0 for cos(θ_Z). 32 bins, from 0 to 360°, are used for RA, and include events in the declination band 0 to −30° (zenith band 90 to 120°). Since some of the ν_µ are assumed to oscillate to ν_τ, ν_τ-induced muons are included in the simulation chain. Finally, bin counts for toy Monte Carlo (MC) histograms are varied according to Poisson distributions.
V. LIKELIHOOD RATIO TEST
To determine the compatibility of various new physics
hypotheses with the data and identify acceptance re-
gions, we use a likelihood-ratio test and the ordering
principle of Feldman and Cousins [14]. The signal we
are looking for is a distortion, or a warping, of the
event counts in the energy-zenith plane. A likelihood
analysis takes advantage of this shape of the distribution
and provides a convenient way to include systematic
uncertainties in the overall normalization and shape of
the atmospheric neutrino flux. Systematic uncertainties
are included using the nuisance parameters discussed
above, and the profile construction method [15,16]. The
likelihood function is:
\mathcal{L}\left(\{n_{ij}\}\,\middle|\,\{\mu_{ij}(\theta_r,\theta_s)\}\right) = \prod_{i,j} \frac{\mu_{ij}^{\,n_{ij}}}{n_{ij}!}\, e^{-\mu_{ij}}.
n is the binned toy MC or real data and µ is the prediction. θ_r represents the physics parameters and θ_s the nuisance parameters. In practice, this function is maximized by finding the minimum of the negative log of the likelihood, using the Minuit2 package in ROOT
[17]. The test statistic is the likelihood ratio,

R = -2\ln\left(\mathcal{L}_0/\hat{\mathcal{L}}\right),

where L₀ is the maximum likelihood, i.e., the best fit to the data or the toy MC histogram, with the physics parameters held fixed and the nuisance parameters allowed to vary over the ranges of their uncertainties. L̂ is the maximum likelihood when the physics as well as the nuisance parameters are allowed to vary.
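The sketch below illustrates the binned Poisson likelihood and the profile likelihood ratio R in Python. The analysis itself uses Minuit2 within ROOT; scipy.optimize is used here only to keep the example self-contained, and mu_model is a hypothetical stand-in for the weighted Monte Carlo prediction as a function of physics (θ_r) and nuisance (θ_s) parameters.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

def neg_log_likelihood(mu, n):
    """-ln L for Poisson bins (the constant n! term is kept for completeness)."""
    mu = np.clip(mu, 1e-12, None)
    return np.sum(mu - n * np.log(mu) + gammaln(n + 1.0))

def likelihood_ratio(n, mu_model, theta_r_fixed, theta_s0, theta0):
    """R = -2 ln(L0 / Lhat): physics parameters fixed vs. everything free."""
    k = len(theta_r_fixed)
    # L0: physics parameters held fixed, nuisance parameters profiled
    f0 = minimize(lambda ts: neg_log_likelihood(mu_model(theta_r_fixed, ts), n),
                  theta_s0, method="Nelder-Mead")
    # Lhat: physics and nuisance parameters all free
    fhat = minimize(lambda t: neg_log_likelihood(mu_model(t[:k], t[k:]), n),
                    theta0, method="Nelder-Mead")
    return 2.0 * (f0.fun - fhat.fun)
```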
In the absence of new physics effects, the likelihood
method will be used to evaluate theoretical uncertainties
in conventional and prompt atmospheric neutrino flux
models. For these analyses, experimental and theoretical
uncertainties are split into separate nuisance parameters,
and those associated with theoretical uncertainties in
the conventional and/or prompt neutrino flux become
physics parameters.
VI. DFT ANALYSIS
Neutrino oscillations in the vector model depend on
the x and y components of the neutrino propagation di-
rection. Hence, a phase angle specifying the offset from
a preferred direction is required. To conduct a model-
independent search for a sidereal signal independent of
an arbitrary assumption about this phase angle, we use a
DFT analysis. This analysis is done in two stages and has
been adapted from a similar analysis performed with the
MINOS detector to search for a directional dependence
[18]. In the first stage, the data is checked for consistency
with the hypothesis of no sidereal signal. In the second
stage, constraints are placed on the SME coefficients of
the vector model.
First, a large number of toy experiments are performed
in which the right ascensions of all events in the data
are randomly redistributed. The power spectral densities
(PSDs) in the n = 1 to n = 4 components of a DFT are computed for each of these 'noise-only' toy experiments. The corresponding frequencies are n/T⊕, where T⊕ is a sidereal day. The PSDs of the true data histogram are then computed and compared to the range of PSDs from the toy experiments. This indicates whether the data is consistent with the hypothesis of no sidereal signal.
In the vector model, the muon neutrino survival probability varies with RA with a modulation frequency of 4/T⊕. To constrain the vector model, we look for an excess of power in the n = 4 harmonic. The energy and zenith angle distributions are modeled using simulated events and the best-fit nuisance parameter values from the data. A large number of toy MC experiments are created, using these best-fit values. The physics parameters of the vector model are then increased, and the simulated events reweighted accordingly, until a PSD greater than the 99th percentile of the PSDs from the noise-only toy experiments is obtained. The values found in each of these trials are then averaged to find the sensitivity of this analysis given the data and the absence of a signal.
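A toy version of the two-stage DFT procedure is sketched below for a 32-bin right-ascension histogram. The synthetic "data", the event count, and the number of toy experiments are placeholders; only the structure of the test is meant to be illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def power_spectral_density(counts, harmonics=(1, 2, 3, 4)):
    """PSD of the RA histogram at the n/T_sidereal harmonics."""
    fft = np.fft.rfft(counts)
    return {n: np.abs(fft[n]) ** 2 for n in harmonics}

n_events, n_bins, n_toys = 20000, 32, 1000
data = rng.poisson(n_events / n_bins, size=n_bins)   # stand-in for the real histogram

# stage 1: noise-only toys -> PSD distribution under "no sidereal signal"
toy_psd4 = np.empty(n_toys)
for i in range(n_toys):
    toy = np.bincount(rng.integers(0, n_bins, size=data.sum()), minlength=n_bins)
    toy_psd4[i] = power_spectral_density(toy)[4]

# stage 2: compare the n = 4 harmonic of the data with the 99th percentile of the toys
threshold = np.percentile(toy_psd4, 99.0)
print(power_spectral_density(data)[4], threshold)
```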
VII. RESULTS AND EXPECTATIONS
In a previous analysis of atmospheric muon neutrino
events collected from 2000 to 2006 with the AMANDA-
II detector [19], the data were consistent with the Stan-
dard Model, and upper limits on QG parameters were
set. A VLI upper limit at the 90% CL was found of
∆c/c < 2.8 × 10⁻²⁷
for VLI oscillations proportional to the neutrino energy. A QD upper limit at the 90% CL was found of
γ* < 1.3 × 10⁻³¹ GeV⁻¹
for decoherence effects proportional to E² and with all γᵢ assumed equal. For one year of data from 40-string IceCube, we expect about a factor of three improvement: ∆c/c < 9.0 × 10⁻²⁸ and γ* < 2.5 × 10⁻³² GeV⁻¹.

Fig. 1: VLI model, 90% CL curves. Dashed: AMANDA-II [19]. Dotted: SuperK + K2K [5]. Solid with hash marks: expected 80-string IceCube sensitivity [6]. Solid: expected 40-string IceCube sensitivity (preliminary). (Axes: sin²(2ξ) vs. log₁₀(∆c/c).)

Fig. 2: Decoherence model, 90% CL curves. Dashed: AMANDA-II [19]. Solid: expected 40-string IceCube sensitivity (preliminary). The black box indicates the region scanned in the AMANDA-II analysis. (Axes: log₁₀(γ₆, γ₇) vs. log₁₀(γ₃, γ₈).)
Figure (1) shows 90% CL curves for the n = 1 VLI model. Included are the AMANDA-II analysis, SuperK and K2K [5], and the expected sensitivity for ten years of data from the full, 80-string, IceCube detector [6]. Also included is the 90% CL curve expected for the 40-string IceCube detector, based on a preliminary treatment of nuisance parameters. Figure (2) shows 90% CL curves for the n = 2 decoherence model from the AMANDA-II analysis, and the expected sensitivity for 40-string IceCube (also preliminary).
For the vector model, the sensitivity of the 40-string IceCube detector, at the 99% CL, is expected to be:

a_L^X = a_L^Y < 2.0 × 10⁻²³ GeV,  c_L^TX = c_L^TY < 6.6 × 10⁻²⁷.

These limits are three orders of magnitude lower for the a_L terms and four orders of magnitude lower for the c_L terms than the limits reported in [18]. This is due to the longer baseline of atmospheric neutrinos and the higher energy reach of IceCube.
Data from AMANDA-II were used to find that the best-fit flux Φ for conventional atmospheric neutrinos, starting with the flux Φ_Barr of reference [2], is:

\Phi = (1.1 \pm 0.1)\left(\frac{E}{640\ \mathrm{GeV}}\right)^{-0.056} \Phi_{\mathrm{Barr}}.
This likelihood methodology will also be used with
IceCube data to constrain conventional and prompt at-
mospheric neutrino flux models.
VIII. CONCLUSIONS
Data from IceCube’s 40-string configuration will im-
prove constraints on VLI and decoherence models be-
yond that achieved with AMANDA-II. Additionally, it
will significantly improve constraints on a certain class
of direction-dependent oscillation models.
The IceCube detector will be able to provide improved
constraints on various models of quantum gravity and
atmospheric neutrino flux models as the detector grows
to its final design configuration and as data collection
continues in the following years. The likelihood method
and the flux weighting discussed above provide flexi-
bility to adjust nuisance parameter ranges as IceCube
systematic uncertainties become better constrained.
REFERENCES
[1] S.W. Hawking, Commun. Math. Phys., 87:395 (1982).
[2] G.D. Barr et al., Phys. Rev. D, 70:023006 (2004).
[3] M. Honda et al., Phys. Rev. D, 75:043006 (2007).
[4] A. Karle, arXiv:0812.3981v1.
[5] M.C. Gonzalez-Garcia and M. Maltoni, Phys. Rev. D, 70:033010 (2004).
[6] M.C. Gonzalez-Garcia, F. Halzen and M. Maltoni, Phys. Rev. D, 71:093010 (2005).
[7] A.M. Gago et al., arXiv:hep-ph/0208166v1.
[8] L. Anchordoqui and F. Halzen, Annals Phys., 321:2660 (2006).
[9] V.A. Kostelecky and M. Mewes, Phys. Rev. D, 70:076002 (2004).
[10] A. Hocker et al., arXiv:physics/0703039v4.
[11] R. Enberg, M.H. Reno and I. Sarcevic, Phys. Rev. D, 78:043005 (2008).
[12] A.D. Martin, M.G. Ryskin and A.M. Stasto, Acta Phys. Polon. B, 34:3273 (2003).
[13] G. Fiorentini, A. Naumov and F.L. Villante, Phys. Lett. B, 510:173 (2001).
[14] G.J. Feldman and R.D. Cousins, Phys. Rev. D, 57:3873 (1998).
[15] G.J. Feldman, "Multiple measurements and parameters in the unified approach," Workshop on Confidence Limits, Fermilab (2000).
[16] K. Cranmer, in Statistical Problems in Particle Physics, Astrophysics and Cosmology: Proceedings of PHYSTAT05, L. Lyons and M. Ünel, eds. (Univ. of Oxford, U.K., 2005); arXiv:physics/0511028v2.
[17] R. Brun and F. Rademakers, Nucl. Instrum. Meth. A, 389:81 (1997).
[18] P. Adamson et al., Phys. Rev. Lett., 101:151601 (2008).
[19] R. Abbasi et al., arXiv:0902.0675v1.
PROCEEDINGS OF THE 31st ICRC, ŁÓDŹ 2009
A First All-Particle Cosmic Ray Energy Spectrum From IceTop

Fabian Kislat∗, Stefan Klepser†, Hermann Kolanoski‡ and Tilo Waldenmaier‡ for the IceCube Collaboration∗∗
∗DESY, D-15738 Zeuthen, Germany
†IFAE Edifici Cn., Campus UAB, E-08193 Bellaterra, Spain
‡Institut für Physik, Humboldt-Universität zu Berlin, D-12489 Berlin, Germany
∗∗See special section of these proceedings
Abstract. The IceTop air shower array is presently under construction at the geographic South Pole as part of the IceCube Observatory. It will consist of 80 stations which are pairs of Ice-Cherenkov tanks covering an area of 1 km². In this paper a first analysis of the cosmic ray energy spectrum in the range 2·10¹⁵ eV to 10¹⁷ eV is presented, using data taken in 2007 with 26 IceTop stations. The all-particle spectrum has been derived by unfolding the raw spectrum using response matrices for different mass compositions of the primaries. Exploiting the zenith angle dependence of the air shower development we have been able to constrain the range of possible composition models.

Keywords: IceTop - Energy - Spectrum
I. INTRODUCTION
The IceTop air shower array is currently under con-
struction as part of the IceCube Observatory at the
geographic South Pole [1], [2]. Its 80 detector stations
will cover an area of about 1 km² at an atmospheric depth of 680 g/cm². Each station consists of two ice-filled 1.8 m diameter tanks, each equipped with two Digital Optical Modules (DOMs) [3] as photon sensors (the same as used by IceCube in the deep ice). The photomultipliers inside the two DOMs are operated at different gains to increase the dynamic range.
Air showers are detected via the Cherenkov light emitted by charged particles inside the ice tanks. The light intensity is recorded by an 'Analog Transient Waveform Digitizer' (ATWD) at a 300 MSPS sampling rate. When the signal inside a DOM crosses a threshold, a 'local coincidence' signal is sent to neighbouring DOMs. Data taking is started when a local coincidence signal is received from the high gain DOM in the other tank of a station within 250 ns. This ensures that the two tanks of a station always trigger together. This analysis only uses events where at least five stations have triggered.
The analysis uses the signal sizes, obtained by integrating the waveforms, and the arrival times, determined from the leading edge of a waveform. Because of its high altitude of 2835 m, IceTop is located close to the shower maximum for showers of energies between 10¹⁵ eV and 10¹⁷ eV (about 550 g/cm² to 720 g/cm²). Therefore, the signals measured by IceTop are dominated by the electromagnetic component of the air showers.
Fig. 1. An example of a lateral signal size distribution from a single
event in IceTop. Each data point corresponds to the signal in terms of
Vertical Equivalent Muons measured in an IceTop tank. The curve is
a fit of the lateral distribution function (1).
II. ENERGY RECONSTRUCTION
Generally, the shower direction can be reconstructed
from the arrival times of signals, while the core position
and the primary energy of the air shower are inferred
from the lateral distribution of signal sizes. These signal
sizes are calibrated with signals from vertical muons to
eliminate differences between the tanks. Signals given in
units of Vertical Equivalent Muons (VEM) are a detector
independent measure of the intensity of an air shower at
the position of a tank.
A first estimate of the shower core position is obtained
by finding the signal center-of-gravity which is defined
as the average of the tank positions weighted by the
square-root of the signal size. The shower direction is
obtained by fitting a plane to the measured signal times.
These two first guesses are used as an input to a more
accurate iterative maximum likelihood fit.
In the latter procedure a lateral distribution function
is fitted to the measured signal sizes and an arrival
time distribution is fitted to the signal times. The lateral
distribution function [4] has been obtained from air
shower and detector simulations and corresponds to a
second order polynomial on a double logarithmic scale:
S(r) = S_{\mathrm{ref}} \left(\frac{r}{R_{\mathrm{ref}}}\right)^{-\beta_{\mathrm{ref}} - \kappa \log_{10}(r/R_{\mathrm{ref}})}   (1)
S_ref is the signal expectation value at the reference radius R_ref from the shower axis, β_ref is a slope parameter related to the shower age and κ is a curvature parameter of the lateral signal distribution function. Based on a study of the stability of fit results the reference radius has been fixed to R_ref = 125 m, which also corresponds to the IceTop grid spacing. An example is shown in Figure 1. The likelihood function used in the fitting procedure is based on a study of signal fluctuations and also takes into account stations that do not trigger [4].
Taking into account the spatial curvature of the shower
front, which is assumed to have a fixed profile [5], the
arrival time distribution t(r) is given by:

t(r) = 19.41\,\mathrm{ns}\left(e^{-(r/118.1\,\mathrm{m})^2} - 1\right) - 4.823\cdot 10^{-4}\,\frac{\mathrm{ns}}{\mathrm{m}^2}\, r^2   (2)
This t(r) indicates the time delay of a signal at distance r from the shower axis with respect to a planar shower front through the shower core and perpendicular to the shower axis. Using this shower front parametrisation improves the direction resolution compared to the plane fit first guess result.
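For reference, Eqs. (1) and (2) can be written out directly as functions; the sketch below does so in Python. The fixed constants are those quoted in the text, while S_ref, β_ref and κ are left as free inputs with placeholder values, and the sign convention follows Eq. (2) as written above.

```python
import numpy as np

R_REF = 125.0  # m, reference radius (the IceTop grid spacing)

def lateral_signal(r, s_ref, beta_ref, kappa):
    """Expected signal (VEM) at perpendicular distance r (m) from the axis, Eq. (1)."""
    x = r / R_REF
    return s_ref * x ** (-beta_ref - kappa * np.log10(x))

def front_delay(r):
    """Time offset (ns) of the shower front relative to a plane, Eq. (2)."""
    return 19.41 * (np.exp(-(r / 118.1) ** 2) - 1.0) - 4.823e-4 * r ** 2

r = np.linspace(20.0, 600.0, 5)
print(lateral_signal(r, s_ref=10.0, beta_ref=3.0, kappa=0.3))  # placeholder parameters
print(front_delay(r))
```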
The complete log-likelihood function, therefore, has three terms,

\mathcal{L} = \mathcal{L}_{\mathrm{hit}} + \mathcal{L}_{\mathrm{nohit}} + \mathcal{L}_{\mathrm{time}}.   (3)

L_hit is based on the log-normal distribution of signal sizes obtained from the study of signal size fluctuations, L_time is based on an assumed Gaussian distribution of arrival times, and L_nohit is defined by

\mathcal{L}_{\mathrm{nohit}} = \sum \log\left(1 - P^{A}_{\mathrm{hit}}\,P^{B}_{\mathrm{hit}}\right).   (4)

P^i_hit is the probability that tank i of a station triggers, given the signal expectation of the fit at that iteration step and the shower core position and direction. The likelihood accounts for the 'local coincidence' condition which requires both tanks of a station to trigger before signals are transmitted. The sum in (4) runs over all stations that did not trigger.
The fit procedure is divided into several steps to
improve the stability. At first the direction is fixed
to the initial first guess value and only the lateral
signal distribution (1) is fitted. In a second iteration
the direction is also varied but the parameters of the
lateral fit are limited to a ±3σ range around the values obtained in the first step. Finally a last iteration with fixed direction is performed. The slope parameter of the lateral distribution function is limited throughout this procedure to 1.5 ≤ β₁₂₅ ≤ 5.
Using the results of the fit a first-guess energy is determined as a function of shower size and the zenith angle. As a measure of the shower size, the signal expectation value S_⟨log r⟩ at the distance ⟨log r⟩ from the core is used, where ⟨log r⟩ is the average of the logarithmic distance of tanks to the shower axis. The size parameter S_⟨log r⟩ has been found to be least correlated to the other fit parameters (in general S₁₂₅ and the slope β₁₂₅ are correlated). From air shower simulations an energy estimator E_rec(S_⟨log r⟩, θ) for any given ⟨log r⟩ based on a proton hypothesis has been derived.

Fig. 2. Raw IceTop energy spectrum divided into three different zenith angle ranges (see text for details). (Axes: log₁₀(E/PeV) vs. dI/dlg(E) in m⁻² s⁻¹ sr⁻¹; zenith bands 0°<θ<30°, 30°<θ<40°, 40°<θ<46°; preliminary.)
To ensure the quality of the reconstructed data several quality criteria are applied:
• Only showers with a zenith angle θ < 46° are considered, restricting the analysis to a well understood zenith angle range;
• The value of the slope parameter must be 1.55 ≤ β₁₂₅ < 4.95, removing all showers for which the likelihood fit ran into the limits on this parameter;
• The uncertainty on the shower core position must be less than 20 m;
• The core position from the likelihood fit and the center-of-gravity first guess must be inside the IceTop array, 50 m away from the array border. Furthermore, the station with the largest signal must not be on the border of the IceTop array.
The last item is to exclude showers with a core outside
the array which have a high probability to be mis-
reconstructed. Using the likelihood fit result alone is
not sufficient because the fit tends to reconstruct these
cores inside the array. These strict requirements on the
core reconstruction are necessary because the position
of the core directly influences the interpretation of the
fitted lateral distribution. Data was taken between June
1st 2007 and October 31st 2007 with the 26 IceTop
stations operated at that time. The filter level data sample
contained 11 262 511 events. After all of the above cuts,
4 131 343 events remained.
The raw energy spectrum, that is the distribution of the
reconstructed energies E_rec without any further corrections or unfolding applied, but after the abovementioned cuts, is shown in Figure 2. The data is split into three zenith angle ranges, 0 ≤ θ < 30°, 30° ≤ θ < 40° and 40° ≤ θ < 46°.
III. AIR SHOWER SIMULATIONS
The relation between the measured signals and the properties of the primary particle can only be obtained from simulations of air showers and the detector. Therefore, a Monte Carlo shower library containing 98 760 proton and iron showers with primary energies between 100 TeV and 100 PeV has been generated with CORSIKA [6]. The hadronic interaction models SIBYLL [7] and Fluka 2006.3b [8] and the CORSIKA atmosphere model No. 14 (South Pole, Dec. 31, 1997, MSIS-90-E) were used.

Fig. 3. Systematic energy shift for (a) protons and (b) iron when using the energy estimator E_rec, as a function of the true energy of the particle (panels show log₁₀(E_reco) − log₁₀(E_true) vs. log₁₀(E_true/PeV) in three zenith angle bands). This energy mis-reconstruction is one of the parameters describing the energy response of the IceTop detector. The strong increase towards low primary energies is a threshold effect as explained in the text. Since E_rec is based on the proton assumption, the energy mis-reconstruction for protons is small while that for primary iron is larger and shows a clear zenith angle dependence.
IV. UNFOLDING THE SPECTRUM
The energy estimator E_rec does not fully account for the angular and energy dependence of the detector acceptance and smearing. This biases the result especially in the threshold region where the detection efficiency increases strongly with energy. In addition, the calculation of E_rec assumes proton primaries and one specific atmosphere profile (CORSIKA atmosphere 12, South Pole, July 01, 1997). A different atmosphere profile or a different composition of primary particles can result in a different energy spectrum. All these effects can be taken into account by unfolding the raw energy spectrum.
The unfolding algorithm uses a response matrix which contains the probability that an incident primary particle with true energy E_true will be assigned the energy E_rec. Response matrices are generated from CORSIKA air shower simulations for proton and iron primaries in three different zenith angle ranges. Response matrices for mixed compositions are obtained as linear combinations of these single primary particle responses.
To calculate the response matrices the simulated data are subdivided into 30 bins of true primary energy, equidistant in log₁₀(E_true). For each bin the energy response is obtained from a distribution of log₁₀(E_rec/E_true) of all reconstructed events, which is approximately normally distributed. Non-zero values of this quantity indicate a wrong energy reconstruction. As such, the mean of this distribution is the 'energy mis-reconstruction' and the width is the energy resolution. The reconstruction efficiency is the ratio between the number of reconstructed and generated events.
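The construction of such a response matrix from simulated (E_true, E_rec) pairs, and its combination for a mixed composition, could look like the sketch below. The binning, the variable names and the proton fraction are placeholders; this is not the unfolding code used in the analysis.

```python
import numpy as np

def response_matrix(log_e_true, log_e_rec, bins):
    """P(E_rec bin | E_true bin), normalized per true-energy column."""
    H, _, _ = np.histogram2d(log_e_rec, log_e_true, bins=(bins, bins))
    col_sums = H.sum(axis=0, keepdims=True)
    return np.divide(H, col_sums, out=np.zeros_like(H), where=col_sums > 0)

bins = np.linspace(-1.0, 2.0, 31)   # 30 bins in log10(E_true/PeV), 0.1 to 100 PeV
# R_p and R_fe would come from the proton and iron CORSIKA samples, e.g.:
#   R_p  = response_matrix(logE_true_p,  logE_rec_p,  bins)
#   R_fe = response_matrix(logE_true_fe, logE_rec_fe, bins)
# A mixed composition is then a linear combination of the single-primary responses:
#   R_mix = f_proton * R_p + (1.0 - f_proton) * R_fe
```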
Depending on the zenith angle, the IceTop detector becomes fully efficient at about 2 PeV, with an effective area of 0.096 km². The energy resolution improves with increasing energy and reaches a value of 0.05 in the logarithm of energy at 10 PeV primary energy, which corresponds to a statistical uncertainty of roughly 12 %.
An important property of the detector response is
the ‘energy mis-reconstruction’ shown in Figure 3. It
is the systematic difference between the true primary
energy and the reconstructed energy. The energy mis-
reconstruction shows a strong dependence on the pri-
mary particle mass and thus on the assumed primary
composition. The attenuation of iron induced showers
with increasing slant depth is greater than for proton
showers. This leads to an underestimation of the primary
energy for inclined iron showers when using the proton-
based energy estimator E_rec. This is nicely visible in Figure 3b.
V. PRELIMINARY ENERGY SPECTRUM FROM ICETOP
Based on these response matrices the three raw energy spectra of Figure 2 are unfolded using an iterative unfolding method [9]. An unfolding based on three different composition assumptions has been made: pure proton, pure iron and the two-component model consisting of proton and iron primaries as motivated in [10]. The results are shown in Figure 4.
Above 4 PeV the spectra have a spectral index ranging from 2.93 to 3.19, depending on the composition assumption and the zenith angle band. The absolute normalization varies between 3.77·10⁻¹⁵ m⁻² s⁻¹ sr⁻¹ GeV⁻¹ and 7.93·10⁻¹⁵ m⁻² s⁻¹ sr⁻¹ GeV⁻¹ at an energy of 10 PeV.
The abovementioned mis-reconstruction of the energy
of inclined iron showers leads to a relative shift of the
spectra from different zenith bands in the unfolding. In case of proton primaries the spectrum from the most vertical zenith bin shows the largest flux and the spectrum from the 40° to 46° zenith band shows the lowest flux. This order is reversed if the primary particles are assumed to be purely iron. Since there are no indications of an anisotropy of the arrival directions of cosmic rays in the energy range under consideration, isotropy has to be assumed. An isotropic flux, however, means that the unfolded spectra obtained from different zenith bands must agree.

Fig. 4. Unfolding result for three different assumptions on the primary composition: pure protons, pure iron, and the two-component model [10]. (Panels: PROTON, IRON, TWO-COMPONENT; axes: log₁₀(E/PeV) vs. E^{1.5}·dI/dlg(E) in PeV^{1.5} m⁻² s⁻¹ sr⁻¹; zenith bands 0°<θ<30°, 30°<θ<40°, 40°<θ<46°; preliminary.)
VI. CONCLUSIONS AND OUTLOOK
A first analysis of the energy spectrum of cosmic
rays in the range between 2 PeV and 100 PeV has been presented. This analysis uses an unfolding procedure to
presented. This analysis uses an unfolding procedure to
account for detector and atmospheric influences as well
as differences in the shower development of different
primary particles. Energy spectra have been obtained for
three different compositions and three different zenith
angle ranges. In case of a pure proton or iron hypothesis
the three different inclination spectra do not agree which
is in conflict with the assumption of an isotropic flux.
Therefore those two pure composition assumptions can
be excluded.
Several issues have not yet been addressed in this
early study. Most importantly, the systematic uncertain-
ties due to the interaction models used in the simula-
tions must be studied. Also the influence of different
atmosphere parametrisations in the simulation must be
analysed and understood. This becomes even more im-
portant when analysing data from a longer period of
time. Currently, in 2009, nearly three quarters of the full detector are in operation.
REFERENCES
[1] T.K. Gaisser et al., IceTop: The Surface Component of IceCube, in Proc. 28th ICRC, Tsukuba, Japan, 2003.
[2] T.K. Gaisser et al., Performance of the IceTop Array, in Proc. 30th ICRC, Mérida, Mexico, 2007; arXiv:0711.0353v1.
[3] R. Abbasi et al., "The IceCube Data Acquisition System: Signal Capture, Digitization, and Timestamping," Nucl. Instrum. Meth. A 601, 294 (2009).
[4] S. Klepser et al., Lateral Distribution of Air Shower Signals and Initial Energy Spectrum above 1 PeV from IceTop, in Proc. 30th ICRC, Mérida, Mexico, 2007; arXiv:0711.0353v1.
[5] S. Klepser et al., First Results from the IceTop Air Shower Array, in Proc. 21st ECRS, Košice, Slovakia, 2008; arXiv:0811.1671v1.
[6] D. Heck et al., Report FZKA 6019 (1998), Forschungszentrum Karlsruhe; http://www-ik.fzk.de/corsika/physics_description/corsika_phys.html
[7] R. Engel, T.K. Gaisser, P. Lipari, and T. Stanev, Air shower calculations with the new version of SIBYLL, in Proc. 26th ICRC, Salt Lake City, USA, 1999.
[8] A. Ferrari, P.R. Sala, A. Fassò, and J. Ranft, "FLUKA: a multi-particle transport code", CERN-2005-10 (2005), INFN/TC 05/11, SLAC-R-773.
[9] G. D'Agostini, "A multidimensional unfolding method based on Bayes' theorem," Nucl. Instrum. Meth. A 362, 487 (1995).
[10] R. Glasstetter et al., Analysis of electron and muon size spectra of EAS, in Proc. 26th ICRC, Salt Lake City, USA, 1999.
[11] S. Klepser, "Reconstruction of Extensive Air Showers and Measurement of the Cosmic Ray Energy Spectrum in the Range of 1–80 PeV at the South Pole", Dissertation, Humboldt-Universität zu Berlin, 2008.
PROCEEDINGS OF THE 31st ICRC, ŁÓDŹ 2009
Reconstruction of IceCube coincident events and study of composition-sensitive observables using both the surface and deep detector

Tom Feusels∗, Jonathan Eisch† and Chen Xu‡, for the IceCube Collaboration§
∗Dept. of Subatomic and Radiation Physics, University of Gent, B-9000 Gent, Belgium
†Dept. of Physics, University of Wisconsin, Madison, WI 53706, USA
‡Bartol Research Institute, University of Delaware, Newark, DE 19716, USA
§See the special section in these proceedings
Abstract. The combined information from cosmic
ray air showers that trigger both the surface and
underground parts of the IceCube Neutrino Observa-
tory allows the reconstruction of both the energy and
mass of the primary particle through the knee region
of the energy spectrum and above. The properties
of high-energy muon bundles, created early in the
formation of extensive air showers and capable of
penetrating deep into the ice, are related to the
primary energy and composition.
New methods for reconstructing the direction and
composition-sensitive properties of muon bundles are
shown. Based on a likelihood minimization pro-
cedure using IceCube signals, and accounting for
photon propagation, ice properties, and the energy
loss processes of muons in ice, the muon bundle
energy loss is reconstructed. The results of the high-
energy muon bundle reconstruction in the deep ice
and the reconstruction of the lateral distribution of
low energy particles in the surface detector can be
combined to study primary composition and energy.
The performance and composition sensitivity for both
simulated and experimental data are discussed.
Keywords: Cosmic ray composition, IceTop/IceCube, high-energy muon bundles
I. INTRODUCTION
The cosmic ray spectrum covers many orders of mag-
nitude in both energy and flux. In the energy range ac-
cessible to the IceCube Neutrino Observatory (∼0.3 PeV
to 1 EeV) the slope of the spectrum remains mostly
constant, except for a feature at around 3 PeV where
the spectrum steepens. This feature is called the knee
of the cosmic ray spectrum, and its origin is unknown.
Proposed explanations include changes in acceleration
mechanisms or cosmic rays leaking from the galactic
magnetic field starting at this energy. Measurement of
mass composition in this range could give clues to
the origin of these cosmic rays. The IceCube detector,
together with the IceTop air shower detector, provides
an opportunity to measure the composition of cosmic
ray particles in the region of the knee and beyond.
The IceTop detector, high on the Antarctic Plain at
an average atmospheric depth of 680 g/cm², consists of
a hexagonal grid of detector stations 125 m apart. Each
station consists of two ice tanks that act as Cherenkov
media for measuring mainly the electromagnetic compo-
nent of cosmic ray air showers. In each tank, two Digital
Optical Modules (DOMs) are deployed, which contain a
10 inch PMT and digital readout and control electronics.
The primary energy can be reconstructed by the IceTop
surface array [1].
Deep below each IceTop station is a string of the
IceCube detector with 60 DOMs evenly spaced between
1.5 and 2.5 km in the ice. Combined, the IceTop and
IceCube arrays can reconstruct the air shower core
position and direction while measuring the shower signal
strength at the surface and the energy deposition of
the high-energy muon bundle in the deep ice. With
these measurements, the energy and mass of the primary
cosmic rays can be reconstructed.
A characteristic difference between showers induced
by light and heavy nuclei for a fixed primary energy
is their number of muons, with higher mass primaries
producing more muon-rich showers. However, the muon
multiplicity is not directly measured by either the IceTop
or IceCube detectors. The deep IceCube detector is
sensitive to Cherenkov light coming mainly from energy
loss processes of high-energy muon bundles. This energy
loss is a convolution of the muon multiplicity of the
shower, the muon energy distribution and the energy
loss of a single muon. If the energy loss behavior of
muon bundles can be reconstructed accurately, it can be
used as a primary mass indicator [2]. In Section IV, the
reconstructed muon bundle track described in Section III
will be used as a seed to reconstruct the muon bundle
energy loss. Simulated data is compared to experimental
data in Section V to examine the detector performance
and its sensitivity to composition.
II. DATA SAMPLE AND SIMULATION
Our experimental data sample was taken from the
month of September, 2008. At that time, the detector
consisted of 40 IceTop stations and 40 IceCube strings.
A total livetime of 28.47 days was obtained by selecting
only runs where both detectors were stable. The events
were processed at the South Pole with a filter which
required at least three triggered IceTop stations and at
least 8 triggered IceCube DOMs. After this filter about
3.31·10⁶ coincident events remained.
To study the direction and energy reconstruction of
muon bundles in the ice, a large number of proton
and iron showers between 10 TeV and 46.5 PeV were
simulated with the CORSIKA [3] package. SIBYLL
2.1 [4], [5] was used as the high-energy (>80 TeV)
hadronic interaction model, while FLUKA08 [6], [7]
was used as the low energy hadronic interaction model.
For this study, CORSIKA was configured to use a model
of the South Pole atmosphere typical for the month
of July [8]. The showers were simulated according to
an E⁻¹ spectrum and then reweighted according to an E⁻²·⁷ spectrum before the knee (at 3 PeV) and an E⁻³·⁰ spectrum after the knee.
The IceCube software environment was used to re-
sample each CORSIKA shower 500 times on and around
the detector, to propagate the high-energy muons through
the ice, and to simulate the detector response and trigger.
The simulation was filtered the same way as data and
yielded about 9.0·10⁴ proton and 9.0·10⁴ iron events.
III. DIRECTION RECONSTRUCTION
The direction reconstruction by IceTop will be de-
scribed first because it will be used later as a seed for
an IceCube muon bundle reconstruction algorithm.
An initial shower core position and direction is de-
termined using the extracted times and charges of the
recorded pulses from the IceTop tanks. The first guess
core position is the calculated center of gravity of tank
signals, while the initial direction reconstruction assumes
a flat shower front. This shower core and direction are
then used as a seed to fit the lateral distribution of pulses
with the double logarithmic parabola (DLP) function
described in [9]. Because the reconstruction of the core
and direction are highly correlated, the resolution is
improved by fitting the shower core position and the
shower direction together, with a curved shower front
and the DLP lateral particle distribution. This fit, and
some geometrical quality cuts discussed later, gives an
angular resolution¹ of 1.0° for iron showers (see the dot-dot-dashed line in Fig. 1) and a core resolution of 15.0 m (see dot-dashed line in Fig. 2).
The resolution depends strongly on where the shower
core lands with respect to the IceTop array. This analysis
uses only events with a shower core reconstructed within
the geometrical area of the IceTop detector. The recon-
structed muon bundle track was also required to pass
within the instrumented volume of the IceCube detector.
The direction that was already reconstructed by the
IceTop algorithms alone can serve as a seed for an algo-
rithm more specialized in reconstructing muon bundles
in the ice. This algorithm uses only the charges measured
¹The resolution of observable Y is defined according to \int_0^x p(\Delta Y)\,d\Delta Y = 0.683, where x is the resolution and p(∆Y) the frequency of distances between the true and the reconstructed observable.
Fig. 1. The distribution of the angles ΔΨ (degrees) between the true direction and the reconstructed direction for simulations of iron showers is shown for different algorithms. The dot-dot-dashed curve shows the reconstruction which uses IceTop information alone. For the dot-dashed line, the reconstructed core position by IceTop is fixed and the zenith and azimuth are determined by using a muon bundle algorithm seeded with the track determined by IceTop. The solid line illustrates the ideal limit of this reconstruction method by using the true shower core position instead of the reconstructed one. The dashed curve is obtained when a second iteration between IceTop and IceCube algorithms is used.
by the IceCube DOMs and takes into account the range-
out of the muons in a bundle [10]. By keeping the
reconstructed core position on the surface fixed, a large
lever arm of at least 1500 m is obtained. This limits
the track parameters (zenith and azimuth) during the
minimization procedure and reduces the number of free
track parameters from 5 to 2.
In Fig. 1, it can be seen that this method (dot-dashed
line) improves on the angular resolution determined by
IceTop alone. If the core position can be determined
more accurately, the direction reconstruction will be
even better. The solid line on Fig. 1 represents the ideal
limit, obtained using the true core position. Therefore,
to improve the core resolution the new direction is kept
fixed and used to seed the IceTop lateral distribution
function which then only fits the core position (dashed
line on Fig. 2). Iterating over both the surface and the
deep detector reconstructions with progressively better
core position and direction seeds, leads to the optimal
resolution. The ideal limit for the core resolution is
acquired by seeding the IceTop algorithm with the true
direction and is shown on Fig. 2 (solid line). After the
second iteration this limit is already obtained and gives
a core resolution of 14.0 m and an angular resolution of
0.9° (see dashed lines in Figures 1 and 2).
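The iteration between the surface and in-ice fits can be summarized by the following sketch. The helper callables (first_guess, fit_inice_direction, fit_icetop_core) are hypothetical placeholders standing in for the IceTop lateral fit and the in-ice muon-bundle fit; only the iteration structure reflects the procedure described here.

```python
def combined_reconstruction(event, first_guess, fit_inice_direction,
                            fit_icetop_core, n_iterations=2):
    """Iterate the surface (core) and in-ice (direction) fits, as described above."""
    core, direction = first_guess(event)   # center of gravity + plane-front fit
    for _ in range(n_iterations):
        # in-ice muon-bundle fit: surface core held fixed, only zenith/azimuth free
        direction = fit_inice_direction(event, fixed_core=core, seed=direction)
        # IceTop DLP fit: direction held fixed, only the core position is refit
        core = fit_icetop_core(event, fixed_direction=direction, seed=core)
    return core, direction
```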
Using this combined method for muon bundle di-
rection reconstruction, an almost energy independent
angular resolution of 0.8° and a core resolution of
12.5 m is obtained for proton induced showers. An im-
provement of the core resolution and direction resolution
also improves the resolution of the reconstructed shower
size and shower age.
Fig. 2. The distribution of distances ΔR (m) between the true shower core position and the reconstructed core position at the surface for iron showers is plotted for different reconstructions. For the dash-dotted line the IceTop DLP function that determines the core position used a first guess direction. The dashed line, where the IceTop algorithm was seeded with a better direction reconstruction from the deep ice, gives a slightly better core reconstruction. When the true direction is used as a seed, the ideal limit represented by the solid line is obtained.
IV. MUON BUNDLE ENERGY RECONSTRUCTION
An energy reconstruction algorithm for single muons
was previously reported in Ref. [12]. Using IceCube
signals and lookup tables together with a previously
reconstructed track a constant energy loss is fit with a
likelihood function. The lookup tables model the South
Pole ice properties and the propagation of Cherenkov
photons through the ice. A single high-energy muon
above 730 GeV loses energy mainly by radiative pro-
cesses like Bremsstrahlung and pair production, which
produce secondary electromagnetic cascades in the ice
along the muon bundle track. Therefore, an infinite light
source with mono-energetic cascades every meter is used
as a model for a single muon.
This light model can also be used for muon bundles.
The main difference is that a slant depth dependent
energy loss will be needed because of the range-out of
muons. Here, the slant depth is defined as the distance
along the muon bundle track between the point where the
muon enters the ice and the point where the Cherenkov
light is emitted.
The equation for the muon bundle energy loss is:

\left(\frac{dE_\mu}{dX}\right)_{\mathrm{Bundle}}(X) = \int_{E_{\min}(\mathrm{surf})}^{E_{\max}(\mathrm{surf})} \frac{dN_\mu}{dE_\mu}\,\frac{dE_\mu}{dX}\; dE_\mu(\mathrm{surf}),   (1)

where dN_µ/dE_µ is the energy distribution of the muons and dE_µ/dX is the energy loss of a single muon. E_min(surf) = (a/b)(e^{bX} − 1) is the minimum energy that a muon needs to get to depth X. E_max(surf) ∝ E₀/A is the maximum energy a muon from a shower induced by a particle with A nucleons and primary energy E₀ can have.
Using a simple power-law as an approximation for the
Elbert formula [13], which describes the multiplicity of
high-energy muons in air showers, the differential muon energy distribution becomes:

\frac{dN_\mu}{dE_\mu} = \gamma_\mu\,\kappa(A)\left(\frac{E_0}{A}\right)^{\gamma_\mu - 1} E_\mu^{-\gamma_\mu - 1},   (2)

where γ_µ = 1.757 is the muon integral spectral index and κ is a normalization that depends on the shower properties.

Fig. 3. An example of reconstructed muon bundle energy loss for a single 17 PeV iron and proton shower. The reconstructed slant depth behavior follows the true energy loss reasonably well. The triangles (squares) are the muon bundle energy loss processes for a certain proton (iron) 17 PeV shower with a zenith angle of 28.4° (12.6°). The spread of the points illustrates the stochastic nature. The solid (dashed) line is the reconstructed muon bundle energy loss function, described in the text, for the proton (iron) shower. (Axes: slant depth (m) vs. muon bundle energy loss (GeV/m).)
With the solution of the single muon energy loss
equation, E_µ(X) = (E_µ(surf) + a/b) e^{−bX} − a/b, the average energy loss formula can be expressed as a function of the muon energy at the surface:

\frac{dE_\mu}{dX}(X) = -a - b\,E_\mu(X) = -b\left(E_\mu(\mathrm{surf}) + \frac{a}{b}\right) e^{-bX},   (3)

with a = 0.260 GeV/m, the ionization energy loss constant, and b = 0.000357 m⁻¹, the stochastic energy loss constant from [11].
The average muon bundle energy loss function is then
obtained by integrating Eq. (1) using Eqs. (2) and (3).
This energy loss fit function, with κ and E₀/A as free
parameters, will be used to scale the expected charges
in a DOM from the likelihood formula in [12] instead
of scaling it with a constant energy loss.
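A numerical sketch of the resulting average bundle energy loss, obtained by integrating Eq. (1) with Eqs. (2) and (3), is given below. κ, E₀/A and the proportionality constant in E_max(surf) are treated as free or assumed parameters; the example values are placeholders, not fit results.

```python
import numpy as np

A_ION = 0.260       # GeV/m, ionization energy loss constant
B_STOCH = 0.000357  # 1/m, stochastic energy loss constant
GAMMA_MU = 1.757    # muon integral spectral index

def bundle_energy_loss(X, kappa, e0_over_a, emax_factor=1.0, n_steps=2000):
    """|dE/dX| of the muon bundle (GeV/m) at slant depth X (m), Eqs. (1)-(3)."""
    e_min = (A_ION / B_STOCH) * (np.exp(B_STOCH * X) - 1.0)  # minimum surface energy
    e_max = emax_factor * e0_over_a                           # assumed E_max ~ E0/A
    if e_max <= e_min:
        return 0.0
    e_surf = np.geomspace(e_min, e_max, n_steps)
    dn_de = GAMMA_MU * kappa * e0_over_a ** (GAMMA_MU - 1.0) * e_surf ** (-GAMMA_MU - 1.0)
    de_dx = B_STOCH * (e_surf + A_ION / B_STOCH) * np.exp(-B_STOCH * X)  # magnitude of Eq. (3)
    return np.trapz(dn_de * de_dx, e_surf)

slant = np.arange(1400.0, 2800.0, 200.0)
print([bundle_energy_loss(X, kappa=1.0, e0_over_a=1e6) for X in slant])
```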
In Fig. 3, the curves show the reconstructed muon
bundle energy loss functions for 17 PeV primaries. The
data points are the energy losses calculated from simula-
tions. It can be clearly seen that the muon bundle energy
loss function obtained by minimizing the likelihood
function describes the depth behavior for these two
showers better than a constant energy loss function.

Fig. 4. The reconstructed muon bundle energy loss evaluated at a slant depth of 1650 m versus the reconstructed shower primary energy (both on a log₁₀ scale; IceCube preliminary). Shown are the center 68% of the proton simulation (right diagonal hashes), the iron simulation (left diagonal hashes) and the center 68% of the data (enclosed in the rectangles). The median of the distribution is also shown for the simulated data sets. The events were filtered as described in Section II with the additional quality requirement that the muon bundle reconstruction algorithm used signals from at least 50 DOMs, to remove events with poorly fit energy loss. This quality cut removes a higher portion of muon bundles with low energy loss and will be accounted for in a full composition analysis.
It has been shown in [2] that proton and iron show-
ers are separated better by the muon bundle energy
loss at smaller slant depths. The reconstructed IceCube
composition-sensitive parameter which will be used fur-
ther on in the coincidence analysis, is the energy loss
at the top of the IceCube detector, at a slant depth of
1650 m. At this slant depth, Cherenkov light from show-
ers with a zenith angle up to 30° can still be detected
by the upper DOMs, making the energy reconstruction
more accurate over the entire zenith range.
V. COMBINED PRIMARY MASS AND ENERGY
RECONSTRUCTION
Fig. 4 shows a comparison of the proton and iron primary simulation and the experimental data described in Section II, using the reconstruction methods described in Sections III and IV. While this plot has only rough
quality cuts, it can be seen that the spread in the
simulation is similar to the spread in the data. The
median muon bundle energy loss from an iron primary
shower is approximately a factor of 2 higher in the
ice than the median muon bundle energy loss from
a proton primary shower for the same reconstructed
primary energy. There is a large overlap area, where
shower to shower fluctuations overcome the effect of the
primary mass. It is difficult to reconstruct the primary
mass of a single shower with any certainty due to these
large fluctuations.
VI. CONCLUSION AND OUTLOOK
The IceTop and IceCube detectors of the IceCube
Neutrino Observatory can be used together for an im-
proved air shower core location and direction reconstruc-
tion. The promising method of reconstructing the muon
bundle energy loss behavior will be further developed
to be used in a measurement of cosmic ray mass and
energy. This study used only a subsample of the available
data; with more statistics and an enlarged detector these
methods can be extended up to 1 EeV.
VII. ACKNOWLEDGEMENTS
This work is supported by the Office of Polar Pro-
grams of the National Science Foundation and by FWO-
Flanders, Belgium.
REFERENCES
[1] F. Kislat et al., A First All-Particle Cosmic Ray Energy Spectrum
From IceTop, These Proceedings.
[2] X. Bai et al., Muon Bundle Energy Loss in Deep Underground
Detector, These Proceedings.
[3] D. Heck et al., CORSIKA FZKA 6019, Forschungszentrum
Karlsruhe, 1998.
[4] R. S. Fletcher et al., SIBYLL: An event generator for simulation
of high energy cosmic ray cascades, Phys. Rev. D, 50 5710,
1994.
[5] R. Engel et al., Air shower calculations with the new version of
SIBYLL, In Proc. 26th ICRC, Salt Lake City, 1999.
[6] A. Fassò et al., FLUKA: a multi-particle transport code, CERN-
2005-10 (2005), INFN/TC 05/11, SLAC-R-773.
[7] G. Battistoni et al., The FLUKA code: Description and bench-
marking , Proceedings of the Hadronic Shower Simulation Work-
shop 2006, Fermilab 6–8 September 2006, M. Albrow, R. Raja
eds., AIP Conference Proceeding 896, 31-49, (2007).
[8] D. Chirkin, Parameterization based on the MSIS-90-E model,
1997, Private communication.
[9] S. Klepser et al., Lateral Distribution of Air Shower Signals and
Initial Energy Spectrum above 1 PeV from IceTop, In Proc. 30th
ICRC, Merida, Mexico, 2007.
[10] K. Rawlins, Ph.D. Dissertation, UW-Madison (2001).
[11] P. Miočinović, Ph.D. Dissertation, UC Berkeley (2001).
[12] S. Grullon et al., Reconstruction of high-energy muon events in
IceCube using waveforms, In Proc. 30th ICRC, Merida, Mexico,
2007.
[13] J.W. Elbert, In Proc. DUMAND Summer Workshop (ed. A.
Roberts), 1978, vol 2, p.101.
PROCEEDINGS OF THE 31st ICRC, ŁÓDŹ 2009
Small air showers in IceTop
Bakhtiyar Ruzybayev∗, Shahid Hussain∗, Chen Xu∗ and Thomas Gaisser∗ for the IceCube Collaboration†
∗Bartol Research Institute, Department of Physics and Astronomy, University of Delaware, Newark, DE 19716, U.S.A.
†See the special section of these proceedings.
Abstract. IceTop is an air shower array that is part of the IceCube Observatory currently under construction at the geographic South Pole [1]. When completed, it will consist of 80 stations covering an area of 1 km². Previous analyses done with IceTop studied the events that triggered five or more stations, leading to an effective energy threshold of about 0.5 PeV [2]. The goal of this study is to push this threshold lower, into the region where it will overlap with direct measurements of cosmic rays, which currently have an upper limit around 300 TeV [3]. We select showers that trigger exactly three or exactly four adjacent surface stations that are not on the periphery of the detector (contained events). This extends the energy threshold down to 150 TeV.
Keywords: IceTop, Air showers, Cosmic rays around the "knee".
I. INTRODUCTION
During 2008, IceCube ran with forty IceTop stations and forty IceCube strings in a triangular grid with a mean separation of 125 m. In the 2008–2009 season, an additional 38 IceTop tanks and 18 standard IceCube strings were deployed, as shown in Fig. 1. When completed, IceCube will consist of eighty surface stations, eighty standard strings and six special strings in the "DeepCore" sub-array [4]. Each IceTop station consists of two ice-filled tanks separated by 10 m, each equipped with two Digital Optical Modules (DOMs) [5]. The photomultipliers inside the two DOMs are operated at different gains to increase the dynamic range of the response of a tank. The DOMs detect the Cherenkov light emitted by charged shower particles inside the ice tanks. Data recording starts when the local coincidence condition is satisfied, that is, when both tanks are hit within a 250 ns interval. In this paper we use the experimental data taken with the forty-station array and compare them to simulations of this detector configuration. Here we describe the response of IceTop in its threshold region.
II. ANALYSIS
The main difference between this study and analyses done with five or more stations triggering is the acceptance criterion. In previous analyses, we accepted events with five or more hit stations and with a reconstructed shower core location within the predefined containment
area (shaded area in Fig. 1). In addition, the station with the biggest signal in the event must also be located within the containment area.

Fig. 1: Surface map of IceCube in 2009 (string and tank positions, X vs. Y in meters), showing IceCube strings, IceTop tanks, and the strings and tanks added in 2009. New stations are unfilled markers. The shaded area (200795 m²) contains the stations that are defined as inner stations.
In the present analysis we used events that triggered only three or four stations, thus complementing analyses with five or more stations. Selection of the events was based solely on the stations that were triggered. The criteria are:
1) Triggered stations must be close to each other
(neighboring stations). For three station events,
stations form almost an equilateral triangle. For
four station events, stations form a diamond shape.
2) Triggered stations must be located inside the ar-
ray (shaded region in Fig.1). Events that trigger
stations on the periphery are discarded.
Since we are using stations on the periphery as a veto,
we ensure that our selected events will have shower cores
contained within the boundary of the array. In addition,
these events will have a narrow energy distribution. We
analyzed events in four solid angle bins with zenith angles θ: 0°–26°, 26°–37°, 37°–45°, and 45°–53°. Results for the first bin, θ = 0°–26°, are emphasized in this paper. This near-vertical sample will include most of the events with muons seen in coincidence with the deep part of IceCube.
Fig. 2: The all-particle spectrum from air shower measurements as summarized in Figure 24.9 of Review of Particle
Physics [3]. The shaded area indicates the range of direct measurements. The thick black line shows the flux model
used for this analysis and the vertical lines indicate the energy range responsible for 95% of the 3 and 4 station
events.
A. Experimental data and simulations
The experimental data used in this analysis were
taken during an eight hour run on September 1st, 2008.
Two sets of air shower simulations were produced:
pure proton primaries in the energy range of 21.4 TeV–
10.0 PeV, and pure iron primaries in the energy range
of 45.7 TeV–10.0 PeV. All air showers were produced in the zenith angle range 0° ≤ θ ≤ 65°.
Our simulation used the following flux model:

$$\frac{dF}{dE} = \Phi_0 \left(\frac{E}{E_0}\right)^{-\gamma} \qquad (1)$$

$$\Phi_0 = 2.6 \times 10^{-4}\ \mathrm{GeV^{-1}\,s^{-1}\,sr^{-1}\,m^{-2}}, \quad E_0 = 1\ \mathrm{TeV}, \quad \gamma = 2.7$$
for both proton and iron primaries. The normalizations
were chosen such that the fluxes fit the all-particle cosmic ray spectrum as shown in Fig. 2. Simulated showers were dropped randomly in a circular area of radius 600 m around the center of the 40-station array (X = 100 m, Y = 250 m).
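The flux model and the shower-scattering procedure described above translate into a small sampling routine. The sketch below is illustrative only (the proceedings do not state whether the production simulation sampled this spectrum directly or re-weighted to it); the function names, the random seed and NumPy are our own choices, and the energy limits are the proton range quoted above.

```python
import numpy as np

rng = np.random.default_rng(42)

GAMMA = 2.7
E_MIN, E_MAX = 21.4e3, 10.0e6   # proton energy range in GeV, as quoted above
CENTER_X, CENTER_Y, RADIUS = 100.0, 250.0, 600.0  # metres

def sample_energies(n):
    """Inverse-CDF sampling of a spectrum proportional to E^-GAMMA on [E_MIN, E_MAX]."""
    u = rng.random(n)
    a = 1.0 - GAMMA
    return (E_MIN**a + u * (E_MAX**a - E_MIN**a)) ** (1.0 / a)

def sample_cores(n):
    """Uniform core positions in a disc; radius ~ sqrt(u) keeps the density flat."""
    r = RADIUS * np.sqrt(rng.random(n))
    phi = 2.0 * np.pi * rng.random(n)
    return CENTER_X + r * np.cos(phi), CENTER_Y + r * np.sin(phi)

energies = sample_energies(100_000)
x, y = sample_cores(100_000)
print(f"median sampled energy: {np.median(energies) / 1e3:.1f} TeV")
```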
B. Shower Reconstruction
Since the showers that trigger only three or four
stations are relatively small, we use a plane shower
front approximation and the arrival times to reconstruct
the direction. The shower core location is estimated by
calculating the center of gravity of the square root of the
charges in the stations. For the energy reconstruction
we use the lateral fit method [6] that IceTop uses to
reconstruct events with five or more stations triggered.
This method uses shower sizes at the detector level
to estimate the energy of the primary particle. Heavier
primary nuclei produce showers that do not penetrate as
deeply into the atmosphere as the proton primaries of the
same energy. As a result, iron primary showers will have
a smaller size at the detector level than proton showers
of the same energy. We define a reconstructed energy based on simulations of primary protons, fitted to the lateral distribution and size of proton showers. The reconstructed-energy parameter therefore underestimates the energy when applied to showers generated by heavy primaries. We observe a linear correlation between true and reconstructed energies in this narrow energy range and use it to correct the reconstructed energies. We reconstruct the experimental data assuming either pure proton or pure iron primaries.
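A minimal sketch of the two geometric reconstruction steps just described, assuming the surface stations lie at roughly equal altitude so the plane-front fit reduces to a linear least-squares problem in x and y. The function names, the azimuth convention and the use of NumPy are our own choices for illustration, not the IceTop implementation.

```python
import numpy as np

C = 0.299792458  # speed of light in m/ns

def core_cog(x, y, q):
    """Core estimate: centre of gravity of the square roots of the station charges."""
    w = np.sqrt(q)
    return np.sum(w * x) / np.sum(w), np.sum(w * y) / np.sum(w)

def plane_front_direction(x, y, t):
    """Plane shower-front approximation: t = t0 + (dx*x + dy*y)/c, solved by least squares."""
    A = np.column_stack([np.ones_like(x), x, y])
    t0, px, py = np.linalg.lstsq(A, t, rcond=None)[0]
    dx, dy = C * px, C * py                               # horizontal components of the propagation direction
    dz = -np.sqrt(max(0.0, 1.0 - dx * dx - dy * dy))      # downward-going shower front
    zenith = np.degrees(np.arccos(-dz))
    azimuth = np.degrees(np.arctan2(dy, dx)) % 360.0      # azimuth convention assumed here
    return zenith, azimuth
```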
C. Effective Area
We use the simulations to determine the effective area
as a function of energy. Effective area is defined as
$$A_{\mathrm{eff}} = \frac{\mathrm{Rate}[E_{\min}, E_{\max}]}{\Delta\Omega \cdot F_{\mathrm{sum}}} \qquad (2)$$

$$F_{\mathrm{sum}} = \int_{E_{\min}}^{E_{\max}} \Phi_0 \left(\frac{E}{E_0}\right)^{-\gamma} dE \qquad (3)$$

where Rate[E_min, E_max] is the total rate for a given energy bin, ΔΩ is the solid angle of the bin and F_sum is the total flux in the given energy bin.
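Equations (2) and (3) can be evaluated directly once the per-bin rate is known. A hedged Python sketch follows, using the flux parameters of Eq. (1) as reconstructed above; the helper names are ours and this is not the analysis code.

```python
import numpy as np

PHI0, E0, GAMMA = 2.6e-4, 1.0e3, 2.7   # GeV^-1 s^-1 sr^-1 m^-2, GeV, spectral index (Eq. 1)

def integrated_flux(e_min, e_max):
    """F_sum: integral of Phi0*(E/E0)^-gamma dE; result is in s^-1 sr^-1 m^-2."""
    a = 1.0 - GAMMA
    return PHI0 * E0**GAMMA * (e_max**a - e_min**a) / a

def solid_angle(theta_min_deg, theta_max_deg):
    """Solid angle of a zenith band: 2*pi*(cos(theta_min) - cos(theta_max))."""
    return 2.0 * np.pi * (np.cos(np.radians(theta_min_deg)) - np.cos(np.radians(theta_max_deg)))

def effective_area(rate_hz, e_min, e_max, theta_min_deg, theta_max_deg):
    """Eq. (2): effective area in m^2 for a given energy and zenith bin."""
    return rate_hz / (solid_angle(theta_min_deg, theta_max_deg) * integrated_flux(e_min, e_max))
```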
Fig. 3: Effective areas (in m², vs. log10(E_true/GeV)) for different triggers (3 or more stations, 5 or more stations, 3 stations only, 4 stations only) in the most vertical zenith angle range, 0° ≤ θ ≤ 26°, derived using true quantities from simulations (preliminary).
Fig. 4: Reconstructed energy distributions (rate in Hz vs. log10(E_reco/GeV)) for (a) proton and (b) iron simulations for 3 station events in the four zenith bins θ = 0°–26°, 26°–37°, 37°–45° and 45°–53° (preliminary).
Figure 3 shows the calculated effective areas, using the true values of energy and direction, for different trigger combinations, in the most vertical bin (θ = 0°–26°).
III. RESULTS
We summarize the results of simulations and compar-
ison to data in Figures 4–6.
In Fig.4, we see that the energy distribution of the
event rates depends on the zenith angle and the primary
type. As expected, the peak of the energy distribution
moves to higher energies for larger zenith angles and
heavier primaries; these features of the distributions will
be very helpful in unfolding the cosmic ray spectrum and
composition.
Figure 5 shows the energy distributions in the most vertical zenith bin (θ = 0°–26°). The experimental data are reconstructed twice, first with a pure proton assumption, then with a pure iron assumption. For three stations triggered (Fig. 5a), the energy distribution for the pure proton simulation with the flux model defined in (1) agrees better with the experimental data than the iron simulation. For four stations triggered (Fig. 5b), the picture is similar, but the peaks of the distributions are shifted to the right since on average a higher-energy primary is needed to trigger four stations.
Fig. 5: Reconstructed energy distributions (rate in Hz vs. log10(E_reco/GeV)) for (a) 3 station and (b) 4 station events with zenith angles 0°–26°. Experimental data are reconstructed twice, assuming a pure proton and a pure iron primary; the proton and iron simulations are shown for comparison (preliminary).
Fig. 6: Reconstructed zenith distributions (rate in Hz vs. cos θ) for 3 station events: experimental data, proton simulation, and iron simulation (preliminary).
By including three station events we can lower the threshold down to 130 TeV.
Figure 6 shows the zenith distributions of the events. The distribution for the pure iron simulation is lower than for the proton simulation since fewer iron primaries reach the detector level at lower energies. The deficiency of simulated events in the most vertical bin may be due to the fact that we used a constant γ of 2.7 for all energies, whereas at these energies γ is most probably changing continuously. In the most vertical bin, showers must have a lower energy than showers at greater zenith angles. Starting from a lower γ and gradually increasing it for higher energies will increase the number of events in the vertical bin and decrease them at higher energies, thus improving the zenith angle distribution. It is possible to further improve the fit of the proton simulation to the experimental data by adjusting the parameters γ and Φ₀ of the model.
IV. CONCLUSION
We have demonstrated the possibility of extending the IceTop analysis down to energies of 130 TeV, low enough to overlap with direct measurements of cosmic rays. Compared to the IceTop effective area for five or more station hits, our results show a significant increase in effective area for energies between 100–300 TeV (Fig. 3). We plan to include three and four station events in the analysis of coincident events to determine primary composition, along the lines described in [7]. Overall, the results of this analysis encourage us to continue and improve our analysis of small showers.
REFERENCES
[1] T.K. Gaisser et al., "Performance of the IceTop array", in Proc. 30th ICRC, Mérida, Mexico, 2007.
[2] F. Kislat et al., "A first all-particle cosmic ray energy spectrum from IceTop", this conference.
[3] C. Amsler et al. (Particle Data Group), Physics Letters B667, 1 (2008).
[4] A. Karle et al., arXiv:0812.3981, "IceCube: Construction Status and First Results".
[5] R. Abbasi et al., "The IceCube Data Acquisition System: Signal Capture, Digitization, and Timestamping", Nucl. Instrum. Meth. A 601, 294 (2009).
[6] S. Klepser, PhD Thesis, Humboldt-Universität zu Berlin (2008).
[7] T. Feusels et al., "Reconstruction of IceCube coincident events and study of composition-sensitive observables using both the surface and deep detector", this conference.
Cosmic Ray Composition using SPASE-2 and AMANDA-II
K. Andeen∗ and K. Rawlins† For the IceCube Collaboration‡
∗University of Wisconsin-Madison, 1150 University Ave, Madison, WI
†University of Alaska Anchorage, 3211 Providence Dr, Anchorage, AK
‡See special section of these proceedings
Abstract. The precise measurement of cosmic ray
mass composition in the region of the knee (3 PeV)
is critical to understanding the origin of high energy
cosmic rays. Therefore, air showers have been ob-
served at the South Pole using the SPASE-2 surface
array and the AMANDA-II neutrino telescope which
simultaneously measure the electronic air shower
component at the surface and the muonic air shower
component in deep ice, respectively. These two com-
ponents, together with a Monte Carlo simulation and
a well-understood analysis method will soon yield the
relative cosmic ray composition in the knee region.
We report on the capabilities of this analysis.
Keywords: composition, cosmic-ray, neural network
I. INTRODUCTION
The mass composition of high-energy cosmic rays
around and above the knee in the energy spectrum
(∼3 PeV) is dependent upon the mechanisms of cosmic
ray production, acceleration, and propagation. Therefore,
the study of mass composition is critical to understand-
ing the origins of cosmic rays in this energy region. At
energies up to 10¹⁴ eV mass composition can be measured directly using balloon and satellite experiments; however, due to the low flux, composition above 10¹⁴ eV must be obtained from indirect measurements on the
ground. Indirect measurements of composition involve
a close examination of the air shower produced as a
cosmic ray primary smashes into Earth’s atmosphere. By
utilizing information from more than one component of
the shower, such as the electronic and muonic compo-
nents, the energy and relative mass can be obtained from
primary particles with much higher energies than those
currently measurable by direct detection techniques.
II. DATA AND RECONSTRUCTION
One such indirect measurement is possible using the
South Pole Air Shower Experiment (SPASE-2) in coin-
cidence with the Antarctic Muon And Neutrino Detector
Array (AMANDA-II). The SPASE-2 detector is situated
on the surface of the South Pole at an atmospheric depth
of ∼685 g cm⁻² and is composed of 30 stations in a 30 m triangular grid. Each station contains four 0.2 m² scintillators. The AMANDA-II detector lies deep in the ice such that the center-to-center separation between the deep ice and the surface arrays is ∼1730 m, with an angular offset of 12°. AMANDA-II consists of 677
optical modules (OMs) deployed on 19 detector strings
spanning depths from 1500-2000 m below the surface of
the ice. Each OM contains a photomultiplier tube (PMT)
which is optimized for detection of the Cherenkov light
emitted by particles—namely muon bundles—passing
through the ice. In addition to the composition analysis,
this coincident configuration allows for calibration as
well as measurement of the angular resolution of the AMANDA-II detector [1].
For this analysis, coincident data from the years 2003-
2005 are used, for a total livetime of around 600 days.
For comparison with the data, Monte Carlo simulated
proton, helium, oxygen, silicon and iron air showers
with energies between 100 TeV and 100 PeV have been
produced using the CORSIKA air shower generator with
the SIBYLL/GHEISHA hadronic interaction models [2].
At the surface the air showers are injected into GEANT4,
which simulates the SPASE-II detector response [3]. The
showers are then propagated through the ice and the
response of AMANDA-II detector is simulated using the
standard software package of the AMANDA collabora-
tion. An E^−1 spectrum is used for generation, but for analysis the events are re-weighted to the cosmic ray energy spectrum of E^−2.7 at energies below the knee at 3 PeV and E^−3.2 above. Both the experimental data
and the Monte Carlo simulated data are then put through
identical reconstruction chains.
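The re-weighting from the E^−1 generation spectrum to the broken power law can be expressed as a per-event weight. Below is a minimal sketch, assuming the target spectrum is made continuous at the 3 PeV knee and ignoring absolute normalisation and livetime factors, which are not given in these proceedings.

```python
import numpy as np

E_KNEE = 3.0e6  # knee energy in GeV

def spectral_weight(e_gev):
    """Per-event weight = target flux / generation flux, with generation ~ E^-1.

    The target is E^-2.7 below the knee and E^-3.2 above it, matched at E_KNEE
    so the weight is continuous; the overall normalisation is arbitrary here.
    """
    e = np.asarray(e_gev, dtype=float)
    target = np.where(e < E_KNEE,
                      e ** -2.7,
                      E_KNEE ** 0.5 * e ** -3.2)
    generated = e ** -1.0
    return target / generated

weights = spectral_weight([1.0e5, 1.0e6, 1.0e7])
```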
The first step in the reconstruction uses information
from SPASE-2 only. The goal of this first reconstruction
is to find the shower direction, shower size, and core
position of the incoming air shower. The direction can
be computed from the arrival times of the charged
particles in the SPASE-2 scintillators, while the shower
core position and shower size are acquired by fitting the
lateral distribution of particle density to the Nishimura-
Kamata-Greisen (NKG) function. Evaluating the fit at a
fixed distance from the center of the shower (in this case
30 m) [4] gives a parameter called S30, which has units
of particles/m² and will be used throughout this paper
as a measure of the electronic part of the air shower.
The next step in the reconstruction provides a measure
of the muon component of the air shower from a
combined reconstruction which uses both the surface
and deep ice detectors. The core position of the shower
from the SPASE-only fit is held fixed while θ and φ are
varied in the ice to find the best fit of the muon track
in the AMANDA-II detector. Holding the core fixed at
the surface allows for a lever arm of about 1730 m
when calculating directionality, providing a very tight
angular resolution for the track. The expected lateral
distribution function (LDF) of the photons resulting
from the muon bundle in AMANDA-II is computed
and corrected for both the ranging out of muons as
they progress downward through the detector, as well
as the changing scattering length as a function of depth
in the ice caused by dust layers. The LDF is then fit to
the hit optical modules and evaluated at a perpendicular
distance of 50 m from the center of the shower [5]. This
parameter, called K50, has units of photoelectrons/OM
and will be used throughout the rest of this paper as the
measure of the muon component of the air shower.
III. ANALYSIS DETAILS
Once reconstruction has been completed it is impor-
tant to find and eliminate poorly reconstructed events.
Thus events have been discarded which:
• have a reconstructed shower core outside the area of SPASE-2 or a reconstructed muon track passing outside the volume of AMANDA-II,
• have an unreasonable number of hits in the ice given S30 at the surface (these events represent large showers which landed outside of SPASE-2 and were misreconstructed within the array as having a small S30),
• have an unphysical reconstructed attenuation length of light in the ice (an unphysical reconstruction of the attenuation length will lead to a misreconstructed value for K50), or
• are reconstructed independently in SPASE-2 and AMANDA-II as coming from significantly different locations in the sky.
After these cuts have been made, it can be seen in Fig. 1
that our two main observables, S30 and K50, form a
parameter space in which primary energy and primary
mass separate. This is expected, since the showers asso-
ciated with the heavier primaries develop earlier in the
atmosphere and hence have more muons per electron
by the time they reach the surface than the showers
associated with lighter primaries [6]. This means that
K50, which is proportional to the number of muons in
the ice, will be higher for heavier primaries than for
lighter primaries of the same S30, as is observed.
In the three-year data set used for this analysis, more than 100,000 events survive all quality cuts. It is interesting to note that in the previous analysis, using the SPASE-2/AMANDA-B10 detector [5], the final number of events for one year was 5,655. Furthermore, the
larger detector used here is sensitive to higher energy
events. The significant increases in both statistics and
sensitivity, along with a new detector simulation and
revised reconstruction algorithm for the SPASE-2 array,
are the basis for performing a new analysis.
Fig. 1. The two main observables, log10(K50) vs log10(S30), in the Monte Carlo simulation, with proton showers above (red) and iron showers below (blue); in each panel a contour at E_true = 10^5.6 GeV is labelled. The black contours depict lines of constant energy from 5.4 < log10(E_true/GeV) < 6.8, marked every log10(E_true/GeV) = 0.2. The black line along the energy gradient approximates a division between proton and iron showers and is included merely as a reference between the two plots. It is clear that mass and energy are on roughly linear axes.
A. Calibration
To accurately measure the composition using both
electron and muon information, the Monte Carlo sim-
ulations must provide an accurate representation of the
overall amplitude of light in ice (measured here as K50).
However, due to the model-dependencies displayed by
air shower simulations, the overall light amplitude is
subject to systematic errors. Therefore, it is important to
calibrate the composition measurements at low energies
where balloon experiments have provided direct mea-
surements of cosmic ray composition. In light of this, a
vertical “slice” in S30 is selected for calibration which
corresponds with the highest energies measured directly.
At these energies, the direct measurements indicate that
⟨ln A⟩ ≈ 2, or 50% protons and 50% iron [7]. The K50 values of the data in this S30 "slice" are thus adjusted by
Fig. 2. Energy resolution, plotted as log10(E_NN) − log10(E_true) (the difference between the energy reconstructed by the neural network and the true primary energy), shown as the fraction of events in six bins of true energy from 5.6 < log10(E_True/GeV) < 5.8 to 6.6 < log10(E_True/GeV) < 6.8, for iron (solid blue), proton (dashed red) and oxygen (shaded green) primaries. Each energy bin is bounded by two consecutive contours from Fig. 1 (where the indicated energy contour is the lower bound of the first energy bin above). For easier comparison, a Gaussian distribution has been fit to these energy resolution histograms and the mean and sigma of each Gaussian can be found in Table I.
TABLE I
MEAN AND SIGMA OF A GAUSSIAN DISTRIBUTION FIT TO EACH ENERGY RESOLUTION HISTOGRAM IN FIG. 2.

                         True Energy Bins (log10(E)/GeV)
Shower Type  Statistic   5.6-5.8  5.8-6.0  6.0-6.2  6.2-6.4  6.4-6.6  6.6-6.8
Proton       Mean          0.03     0.02     0.00     0.02     0.01     0.00
             σ             0.12     0.11     0.08     0.06     0.05     0.04
Oxygen       Mean         -0.02    -0.01    -0.03     0.00     0.01     0.02
             σ             0.12     0.10     0.07     0.07     0.05     0.09
Iron         Mean         -0.12    -0.05    -0.04    -0.02    -0.01     0.05
             σ             0.15     0.09     0.08     0.08     0.06     0.15
an offset to match the distribution of K50 corresponding to a 50%/50% proton and iron sample.
B. Neural Network Reconstruction of Energy and Mass
Similar past investigations utilized the quasi-linear re-
lationship between K50/S30 and mass/energy, as seen in
Fig. 1, to employ an analysis wherein, after calibration,
the axis is rotated to correspond to the mass/energy
coordinate plane [5] [8]. The rotation analysis works
quite well up to energies slightly above the knee;
however, beyond the knee the relationship between
K50/S30 and mass/energy becomes increasingly non-
linear as the air showers approach the energy where
the shower maximum occurs at the atmospheric depth
of the SPASE-2/AMANDA-II detectors. As the data
set used here has significantly more statistics at these
higher energies than previous studies, it was important
to find a new procedure for extracting the composition. A
neural network technique has therefore been developed
to resolve the mean logarithmic mass at all energies [9].
The main change to the neural network technique
since its development has been to distinguish between
the calculation for energy and the calculation for mass
by using two separate networks. The first neural network
(NN1) is trained to find the primary energy by using
log10(K50) and log10(S30) as input parameters, followed by one hidden layer. The second network (NN2) is trained to find the primary mass of the air shower. NN2 also takes as input log10(K50) and log10(S30) and also
has a single hidden layer of neurons. In both cases
the network is trained through a number of “epochs”,
or training cycles, on half of the simulated proton and
iron showers and tested on the other half of the proton
and iron showers.

Fig. 3. Mass output of the neural network (fraction of events vs. mass output) from the bin 6.2 < log10(E_reco/GeV) < 6.4 for three primary types: iron (solid blue), oxygen (shaded green), and protons (dashed red). On the left is the output from a network trained on all particle types; on the right is the output from a network trained only on protons and iron. As expected, when the neural network is trained on more particle types it begins to reconstruct the intermediate primaries in their proper location as opposed to attempting to classify them as proton or iron.

The results of testing determine the
number of “epochs” each network can be trained through
without overtraining. (The intermediate primaries are
currently used only for checking the mass reconstruction
of the network. It is hoped that they will soon be
plentiful enough to use as inputs for training as well.)
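As a rough illustration of the two-network setup described above, the sketch below builds two small regressors, each taking log10(K50) and log10(S30) as inputs with a single hidden layer, training on half of the simulated showers and testing on the other half. The hidden-layer size, the training details and the use of scikit-learn are assumptions; the actual networks and their training code are not specified in these proceedings.

```python
from sklearn.neural_network import MLPRegressor

def make_net(hidden_units=10):
    # Two inputs, one hidden layer, one output; hidden_units is a placeholder value.
    return MLPRegressor(hidden_layer_sizes=(hidden_units,), max_iter=2000, random_state=0)

def train_and_test(X, y, net):
    """X has shape (n_events, 2): columns log10(K50) and log10(S30).

    The first half of the proton and iron showers is used for training and the
    second half for testing, as described in the text; returns the R^2 score
    on the held-out half.
    """
    n = len(X) // 2
    net.fit(X[:n], y[:n])
    return net.score(X[n:], y[n:])

# Usage sketch: energy_net predicts log10(E), mass_net predicts a mass label.
energy_net, mass_net = make_net(), make_net()
```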
As NN1 is given a full spectrum of energies on which
to train, it very successfully reconstructs the energy of
each shower. Plots of energy resolution separated into
bins of true energy can be found in Fig. 2. The energy
resolution is not only very good but also composition
independent. This can be seen more clearly in Table I,
which shows the Gaussian mean and sigma of each
energy bin from Fig. 2. (Currently only six bins in
energy are shown: higher energies are being generated
and will be available before the conference.)
The mass network outputs a reconstructed mass for
each particle in terms of the primaries on which it has
been trained. As the neural network is trained only on
proton and iron showers, it reconstructs each shower
as some combination of proton and iron. Currently a
minimization technique is used to find the mean log
mass for each energy bin. This minimization technique
has been tested on intermediate primaries and proves to
reconstruct them reasonably well for a network trained
only on protons and iron. However, as the number
of intermediate primaries generated is increasing, it is
hoped that the mass network can soon be trained on a
wider spectrum of primaries. A test of this has been run
and a comparison between the output from a network
trained on all particle types and a network trained only
on protons and iron is shown in Fig. 3. As expected, it is
evident that when the network is trained on intermediate
primaries oxygen is reconstructed in its own location
between protons and iron and no longer as a fraction of
one or the other. This method appears very promising
and, as more intermediate primaries are simulated, this
is the direction in which the analysis will proceed.
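One way to realise the minimisation mentioned above is a two-template fraction fit: find the proton fraction whose mixture of the proton and iron mass-output histograms best matches the data histogram, then convert that fraction into a mean logarithmic mass. The sketch below is only an illustration of this idea, not the analysis code; the binning, normalisation and figure of merit are assumptions.

```python
import numpy as np
from scipy.optimize import minimize_scalar

LN_A_PROTON, LN_A_IRON = np.log(1.0), np.log(56.0)

def mean_ln_a(data_hist, proton_hist, iron_hist):
    """Toy two-template fit: returns <lnA> for one reconstructed-energy bin."""
    data = data_hist / data_hist.sum()
    p = proton_hist / proton_hist.sum()
    fe = iron_hist / iron_hist.sum()

    def chi2(f):
        # Mixture of the two normalised templates with proton fraction f.
        model = f * p + (1.0 - f) * fe
        return np.sum((data - model) ** 2)

    f_best = minimize_scalar(chi2, bounds=(0.0, 1.0), method="bounded").x
    return f_best * LN_A_PROTON + (1.0 - f_best) * LN_A_IRON
```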
IV. DISCUSSION AND OUTLOOK
The SPASE-2/AMANDA-II cosmic ray composition
analysis has lately acquired new Monte Carlo simulation,
a new detector simulation of the surface array and a
revised surface-array reconstruction algorithm. Aided by
these three new features a great deal of progress has
been made. It can clearly be seen that the use of a
modified version of the neural network technique seen in
the ICRC proceedings of 2007 [9] can very accurately
reconstruct the energy of cosmic ray primaries in the
region of the knee in the cosmic ray spectrum. The inclu-
sion of a larger variety of primary particles for training
the neural network is seen to be very promising and,
with increased statistics in the Monte Carlo simulation,
a composition result will soon follow.
V. ACKNOWLEDGMENTS
The authors would like to acknowledge support from
the Office of Polar Programs of the United States Na-
tional Science Foundation as well as the Arctic Region
Supercomputing Center.
REFERENCES
[1] J. Ahrens et al., Nuclear Instruments and Methods, 552:347-359, 2004.
[2] D. Heck and T. Pierog, Extensive Air Shower Simulations with CORSIKA: A User's Guide, http://www-ik.fzk.de/corsika/usersguide/corsika_tech.html
[3] GEANT4 Collaboration, Nuclear Instruments and Methods A, 506:250-303, 2003.
[4] J.E. Dickinson et al., Nuclear Instruments and Methods A, 440:95-113, 2000.
[5] K. Rawlins, PhD thesis, University of Wisconsin-Madison (2001).
[6] T. Gaisser, Cosmic Rays and Particle Physics, Cambridge University Press, 1988.
[7] J.R. Hörandel, International Journal of Modern Physics A, 20(29):6753-6764, 2005.
[8] J. Ahrens et al., Astroparticle Physics, 21:565-581, 2004.
[9] K. Andeen et al., Proceedings of the 30th International Cosmic Ray Conference, 2007.
Study of High p_T Muons in IceCube
Lisa Gerhardt∗† and Spencer Klein∗† for the IceCube Collaboration§
∗Lawrence Berkeley National Laboratory, Berkeley, California 94720
†University of California, Berkeley, Berkeley, California 94720
§See the special section of these proceedings
Abstract. Muons with a high transverse momentum (p_T) are produced in cosmic ray air showers via semileptonic decay of heavy quarks and the decay of high p_T kaons and pions. These high p_T muons have a large lateral separation from the shower core muon bundle. IceCube is well suited for the detection of high p_T muons. The surface shower array can determine the energy, core location and direction of the cosmic ray air shower, while the in-ice array can reconstruct the energy and direction of the high p_T muon. This makes it possible to measure the decoherence function (lateral separation spectrum) at distances greater than 150 meters. The muon p_T can be determined from the muon energy (measured by dE/dx) and the lateral separation. The high p_T muon spectrum may also be calculated in a perturbative QCD framework; this spectrum is sensitive to the cosmic-ray composition.
Keywords: high transverse momentum muons, cosmic ray composition, IceCube
I. INTRODUCTION
The composition of cosmic rays with energies above 10⁶ GeV is not well known. At these energies, the flux of cosmic rays is so low that direct detection with balloon or satellite-borne experiments is no longer feasible and indirect measurement with larger ground arrays must be used. These arrays measure the electronic and hadronic components of a cosmic ray air shower, and must rely on phenomenological interaction models to relate observables like muon and electron density to primary composition. These interaction models are based on measurements made at accelerators that reach a maximum energy roughly equivalent to a 10⁶ GeV proton cosmic ray [1]. Extrapolation to the energy of the detected cosmic rays leads to uncertainties in the composition of cosmic rays at high energies. An alternate method of determining the composition is to use muons with a high transverse momentum (p_T) [2]. At transverse momenta on the order of a few GeV/c, the muon p_T spectrum can be calculated using perturbative QCD (pQCD). Such calculations have been made for RHIC and the Tevatron, and the data are in quite good agreement with modern fixed order plus next-to-leading log (FONLL) calculations [3]. These experimental studies give us some confidence in pQCD calculations for air showers.
Most of the high-energy muons that are visible in IceCube are produced from collisions where a high-energy (large Bjorken x) parton interacts with a low Bjorken x parton in a nitrogen target. These collisions will produce heavy (charmed and bottom) quarks and also jets from high p_T partons; the jets will fragment into pions and kaons. Pions and kaons produce "conventional" muons that have a soft spectrum roughly proportional to E^−3.7 [4] and typically have a low p_T. In contrast, charm and bottom quarks are produced early in the shower. The resulting muons ("prompt" muons) have a harder spectrum, a higher p_T, and are the dominant source of atmospheric muons above 10⁵ GeV [5].
If these particles are produced in the forward direction (where they will be seen by ground based detectors), they can carry a significant fraction of the energy of the incident nucleon. The muon energy and p_T can be related to the energy of the partons that make up the incident cosmic ray. For example, a 10¹⁶ eV proton has a maximum parton energy of 10¹⁶ eV, while the maximum parton energy for a 10¹⁶ eV iron nucleus (with A = 56) is 1.8 × 10¹⁴ eV. These two cases have very different kinematic limits for high p_T, high-energy muon production, so measurements of high p_T muon spectra are sensitive to the cosmic-ray composition.
High p_T muons constitute a small fraction of the muons visible in IceCube; the typical muon p_T is of order a few hundred MeV/c. In this domain of "soft physics," the coupling constants are very large, and pQCD calculations are no longer reliable. One is forced to rely on phenomenological models; different hadronic interaction models give rather different results on composition [6].
IceCube, a kilometer-scale neutrino telescope, is well suited for the detection of muons with p_T's above a few GeV/c [2]. When completed in 2011, it will consist of a 1 km³ array of optical sensors (digital optical modules, or DOMs) buried deep in the ice of the South Pole and a 1 km² surface air shower array called IceTop. IceTop has an energy threshold of 300 TeV and can reconstruct the direction of showers with energies above 1 PeV within 1.5° and locate the shower core with an accuracy of 9 m [7]. The in-ice DOMs (here referred to as IceCube) are buried in the ice 1450 m under IceTop in kilometer-long strings of 60 DOMs with an intra-DOM spacing of 17 m. IceCube can reconstruct high energy muon tracks with sub-degree accuracy. The IceTop measurements can be used to extrapolate the interaction height of the
shower and IceCube can measure the energy, position and direction of the muons. These values can be used to calculate the p_T:

$$p_T = \frac{d\,E_\mu}{h\,c} \qquad (1)$$

where E_µ is the energy of the high p_T muon, d is its lateral separation and h is the interaction height of the shower, here taken as an average value of 25 km. The interaction height depends loosely on the composition, and a full treatment of this is planned in the future. Taking 150 m, 25 meters more than the separation between strings of DOMs in IceCube, as a rough threshold for the two-track resolution distance of the high p_T muon from the shower core gives a minimum resolvable p_T of 6 GeV/c for a 1 TeV muon.
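For concreteness, the number quoted above can be checked directly from Eq. (1); the snippet below is just that arithmetic.

```python
# Worked example of Eq. (1): a 1 TeV muon separated by d = 150 m from the core
# of a shower with an assumed average interaction height h = 25 km gives
# p_T = d * E_mu / (h * c) ~ 6 GeV/c.  Expressing p_T in GeV/c absorbs the
# factor of c into the units, so numerically p_T[GeV/c] = d * E[GeV] / h.
E_MU_GEV = 1000.0   # muon energy in GeV
D_M = 150.0         # lateral separation in metres
H_M = 25.0e3        # interaction height in metres

pt_gev_per_c = D_M * E_MU_GEV / H_M
print(f"p_T ~ {pt_gev_per_c:.1f} GeV/c")
```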
Multiple scattering and magnetic fields can deflect the muons as they travel to the IceCube detector, but this is only equivalent to a few hundred MeV/c worth of p_T and is not a strong effect in this analysis. The high p_T muon events are near-vertical, which gives them a short slant depth. In order for muons to reach the IceCube detector they must have an energy of about 500 GeV at the surface of the Earth. These higher energy muons are deflected less by multiple scattering and the Earth's magnetic field.
The combined acceptance for cosmic ray air showers which pass through both IceTop and IceCube is 0.3 km² sr for the full 80-string IceCube array [8]. By the end of the austral summer of 2006/2007, 22 IceCube strings and 26 IceTop tanks had been deployed. The combined acceptance for showers that pass through both IceTop-26 and IceCube-22 is 0.09 km² sr. While this acceptance is too small to expect enough events to generate a p_T spectrum, it offers an excellent opportunity to test reconstruction and background rejection techniques.
II. PREVIOUS MEASUREMENT OF HIGH p_T MUONS
MACRO, an underground muon detector, has previously measured the separation between muons in air showers with energies ranging roughly from 10⁴ GeV to 10⁶ GeV [9]. MACRO measured muon separations out to a distance of about 65 meters. The average separation between muons was on the order of 10 m, with 90% of the muons found with a separation of less than 20 m [9]. MACRO simulated air showers and studied the muon pair separations as a function of the p_T of their parent mesons. They verified the linear relationship between p_T and separation shown in Eq. 1 (with a small offset due to multiple scattering of the muons) out to momenta up to 1.2 GeV/c.
III. HIGH p_T RATE ESTIMATIONS
The decay of charm will produce roughly 10⁶ prompt muons/year with energies in excess of 1 TeV inside the 0.3 km² sr combined acceptance of the full IceCube array [10]. Different calculations agree well in overall rate in IceCube, but, at high (1 PeV) energies, differences in parton distribution functions lead to rates that differ by a factor of 3 [11].

TABLE I: Estimated number of events from charm above different p_T thresholds in 1 year with the 80-string IceCube array

p_T [GeV/c]   Separation [m]   Number
≥ 6           150              ∼500
≥ 8           200              ∼200
≥ 16          400              ∼5
We can roughly estimate the fraction of muons that are produced at high p_T using PYTHIA pp simulations conducted for the ALICE muon spectrometer at a center of mass energy of 14 TeV, which indicate that roughly 0.6% of these events will have a p_T of at least 6 GeV/c [12]. These simulations are done at a higher center of mass energy than the bulk of the IceCube events, and also were for muons produced at mid-rapidity (and, therefore, higher projectile x values and lower target x values than in IceCube). However, they should be adequate for rough estimates. These estimated rates of high p_T muons are further reduced by approximately a factor of 10 by requiring that the high p_T muon be produced in coincidence with a shower that triggers IceTop, leaving a rough expectation of ∼500 events per year with a p_T greater than 6 GeV/c in the 80-string configuration of IceCube. The estimated number of events above a given p_T from the decay of charm is given in Table I, neglecting the uncertainties mentioned above. At higher p_T, bottom production may also contribute significantly to muon production.
The rate of 1 TeV muons from the conventional flux is expected to be about 10⁹ events/year in the full IceCube configuration. The vast majority of these muons will have a low p_T. Based on the p_T spectrum measured from pions produced in 200 GeV center of mass energy pp collisions by the PHENIX collaboration, the number of events expected with p_T > 6 GeV/c is estimated to be 1 in 6 × 10⁶ [13]. Requiring a p_T > 6 GeV/c and an IceTop trigger leaves roughly 20 events/year in the full IceCube configuration from the conventional flux of muons.
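The rate estimates above follow from simple arithmetic; the short script below merely reproduces them. The IceTop-coincidence suppression factor for the charm channel (roughly 1/10) is taken from the text, not computed independently, and the quoted values in the text are rounded.

```python
# Back-of-the-envelope reproduction of the quoted rate estimates.
charm_muons_per_year = 1.0e6        # prompt muons > 1 TeV in 0.3 km^2 sr [10]
frac_charm_pt_above_6 = 0.006       # ~0.6% with p_T > 6 GeV/c from the PYTHIA estimate [12]
icetop_suppression = 1.0 / 10.0     # coincidence requirement, factor read off the text

charm_with_icetop = charm_muons_per_year * frac_charm_pt_above_6 * icetop_suppression
print(f"charm, p_T > 6 GeV/c with IceTop trigger: ~{charm_with_icetop:.0f} / year (text quotes ~500)")

conventional_per_year = 1.0e9       # 1 TeV muons from pions and kaons
conv_pt_above_6 = conventional_per_year / 6.0e6   # 1 in 6e6 above 6 GeV/c [13]
print(f"conventional, p_T > 6 GeV/c before the IceTop trigger: ~{conv_pt_above_6:.0f} / year")
```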
IV. RECONSTRUCTION METHODS
High p_T muons will appear as a separate track, coincident in time and parallel with the track from the central core of low p_T muons. Generally the bundle of low p_T muons leaves more light in the detector than the high p_T muon.
Current reconstruction algorithms in IceCube are de-
signed to reconstruct single tracks. In order to recon-
struct these double-track events the activated DOMs
are split into groups using a k-means clustering algo-
rithm [14]. The first group is the muon bundle and
the second group is the high p_T muon. Each group is
then reconstructed with a maximum likelihood method
that takes into account the scattering of light in ice.
After this initial reconstruction, the groups are resplit
Fig. 1: Zenith angle resolution (true Θ − reconstructed Θ, in degrees) of the high p_T reconstruction algorithm, for the muon bundle (true split σ = 2.2, reconstructed split σ = 2.7) and for the high p_T muon (true split σ = 3.0, reconstructed split σ = 4.1). The sigma are the results of Gaussian fits to the distributions.
according to their time residual relative to the muon bundle reconstruction and re-fit. The first splitting forces the activated DOMs into two groups, which is not correct for the background events which don't have a high p_T muon and generate only a single shower in the array. Splitting the activated DOMs a second time according to their time residual allows for the possibility for all the activated DOMs to end up in a single group. Figure 1 shows the performance of this clustering algorithm. The zenith angle resolution for groups determined using the true simulation information (black, solid lines) is compared to the resolution for groups determined using the clustering algorithm (red, dashed lines) for the muon bundle (top) and the high p_T muon (bottom). Roughly 20% of the events fail to reconstruct because there are not enough DOMs in one of the groups.
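Below is a minimal sketch of the first splitting step, assuming the clustering features are simply the activated DOM coordinates; the actual feature set, library and tie-breaking used by IceCube are not specified here, so this is an illustration of the idea rather than the implementation.

```python
import numpy as np
from sklearn.cluster import KMeans

def split_doms(dom_xyz):
    """Split activated DOM positions (shape (n_hits, 3)) into two groups with k-means.

    Returns (bundle_idx, candidate_idx): by convention the larger group is
    called the muon bundle and the smaller one the high-p_T muon candidate,
    since the bundle usually lights up more DOMs.
    """
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(dom_xyz)
    groups = [np.where(labels == k)[0] for k in (0, 1)]
    groups.sort(key=len, reverse=True)
    return groups[0], groups[1]
```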
These reconstruction algorithms achieve a zenith angle resolution of 2.7° for the muon bundle and 4.1° for a high p_T muon separated by 400 m. The resolution is worse for the high p_T muon because fewer DOMs are activated. While high p_T muons with a greater separation are much easier to resolve with the two track algorithm, they also tend to be lower energy (see Eq. 1) and activate fewer DOMs. The average number of DOMs activated by the high p_T muon is 12, compared to 50 for the muon bundle. Additionally, because high p_T muons have fewer activated DOMs, a DOM activated by the muon bundle that is incorrectly placed in the high p_T group has a much larger effect on the reconstruction of the high p_T track. These factors lead to a poorer resolution in the high p_T track direction.
V. SIGNAL AND BACKGROUND SEPARATION
Many of the processes that can generate a high p_T muon are not properly modeled or included in CORSIKA [15], with the exception of the modified DPMJET model discussed in [16]. Also, since high p_T muons occur in only a fraction of simulated showers, making simulation very time intensive, a toy model based on CORSIKA proton showers was used to model the signal. A single muon is inserted into an existing CORSIKA event containing a muon bundle from an air shower. This modified shower is then run through the standard IceCube propagation, detector simulation, and reconstruction routines. Simulations insert a muon with an energy of 1 TeV separated by 100, 150, 200, and 400 m from the shower core. This gave a clean sample ideal for the development of reconstruction routines optimized to identify simultaneous parallel tracks inside the detector.
Cosmic ray air showers that do not generate a high p_T muon are considered a background to this search. Since they generate only a single shower in the array, these events are mostly eliminated by requiring that there be two well-reconstructed tracks in the IceCube detector. These single showers are well reconstructed by a single track hypothesis, while the high p_T muon events are not. Figure 2 shows the negative log of the reduced likelihood of a single track reconstruction for single showers, and for showers with an inserted 4, 8, and 16 GeV/c p_T, 1 TeV muon (separation of 100 m, 200 m, and 400 m from the shower core, assuming an average interaction height of 25 km). Well reconstructed events have a lower value on this plot. For large separations, this variable separates single showers from showers which contain a high p_T muon. When the separation between the high p_T muon and the shower core drops below the interstring distance (the blue, dot-dashed line in Figure 2), it is no longer possible to cleanly resolve the high p_T muon from the shower core and the event looks very similar to a single shower.
Fig. 2: Negative log of the reduced likelihood for the single track reconstruction, for single cosmic ray showers and for showers with an inserted 4, 8, and 16 GeV/c muon.
The IceCube 22-string configuration is large enough that the rate of simultaneous events from cosmic rays is significant. Muon bundles from two (or more) air showers can strike the array within the 10 µs event window, producing two separated tracks. These so-called double-coincident events are the dominant background for air showers with high p_T muons. Since these double-coincident events are uncorrelated in direction and time, requiring that both reconstructed tracks be parallel and occur within 1 µs can eliminate most of these events.
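The parallelism and timing requirement can be written as a simple selection function. In the sketch below the 1 µs window is taken from the text, while the 10° space-angle cut is a placeholder value chosen only for illustration; the actual cut values are not given here.

```python
import numpy as np

def passes_parallel_cut(dir1, dir2, t1_ns, t2_ns,
                        max_space_angle_deg=10.0, max_dt_ns=1000.0):
    """Keep an event only if the two reconstructed tracks are nearly parallel
    and occur within the allowed time window."""
    d1 = np.asarray(dir1, dtype=float)
    d2 = np.asarray(dir2, dtype=float)
    d1 /= np.linalg.norm(d1)
    d2 /= np.linalg.norm(d2)
    space_angle = np.degrees(np.arccos(np.clip(np.dot(d1, d2), -1.0, 1.0)))
    return space_angle <= max_space_angle_deg and abs(t1_ns - t2_ns) <= max_dt_ns
```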
Due to misreconstructions some events will survive
this selection, and criteria to eliminate these misre-
constructed events are being developed. However, an
irreducible background remains from double-coincident
events that happen to come from roughly the same
direction and time.
Fig. 3: Candidate shower with a high
p
T
muon. The
colored circles show DOMs that are activated by light.
The color and size of the circle correspond to the time (red being earliest) and magnitude of the signal, respectively, and the white dots indicate DOMs that
are not activated in the event. The red lines show the
reconstructed tracks.
After applying selection criteria to reduce the cosmic ray background, a 10% sample of the IceCube data was scanned by eye for high p_T muon candidates. Several events were found, and Figure 3 shows a representative event. The two tracks occur within 600 ns of each other and the space angle between the two reconstructions is 5.6°.
VI. HORIZONTAL EVENTS
One possible method to avoid the backgrounds from single and double-coincident cosmic ray air showers is to search for events that come from the horizontal direction. At a zenith angle of 60° the slant depth through the ice is about 3 km. Only muons with energies in excess of several TeV will reach IceCube [17], and the event multiplicity is greatly reduced. Because the intra-DOM spacing is 17 m, compared to the intra-string spacing of 125 m relevant for vertical events, the two-track resolution should improve. A disadvantage of searching in the horizontal direction is the increase in the deflection from the magnetic field and multiple scattering with the slant depth, which could lead to a greater spread in the muon bundle and create event topologies which mimic a high p_T muon event. Nevertheless, the advantage of background suppression is strong. In addition to high p_T muons in cosmic rays, there are a number of other processes (such as the decay of the supersymmetric stau [18]) that can produce horizontal parallel tracks, making the horizon an interesting direction.
A search was conducted in 10% of the IceCube data from 2007 for these horizontal events using simple topological selection criteria. Events which reconstructed within 30° of horizontal were selected and searched for a double track topology. Several interesting candidate events were found; one is shown in Figure 4. The two tracks occur within 1 ns of each other and the space angle between the two reconstructions is 33°.

Fig. 4: Candidate double track near-horizontal event.
VII. CONCLUSIONS
IceCube is large enough that the study of high p_T muons has become viable. Resolution of tracks separated by at least 150 meters from the shower core will allow identification of muons with p_T's of at least 6 GeV/c. A two-track reconstruction algorithm has been developed and selection criteria for identification of air showers with high p_T muons are under development. A search for horizontal double tracks is also under way. The rate of high p_T muon production is sensitive to the composition of the cosmic rays and offers an alternative to existing composition studies.
This work is made possible by support from the NSF and the DOE.
REFERENCES
[1] R. Engel, Nuclear Physics B Proc. Suppl., 151:437 (2006).
[2] S. Klein and D. Chirkin, 30th ICRC (2007), arXiv:0711.0353v1.
[3] R. Vogt, M. Cacciari and P. Nason, Nucl. Phys. A774, 661 (2006).
[4] T.K. Gaisser, "Cosmic Rays and Particle Physics," Cambridge University Press, 1990.
[5] L. Pasquali, M. Reno, and I. Sarcevic, Phys. Rev. D 59:034020 (1998).
[6] W. Apel et al., Astropart. Phys. 29:412 (2008); J. Bluemer et al., arXiv:0904.0725.
[7] S. Klepser, 21st European Cosmic Ray Symposium (2008), arXiv:0811.1671.
[8] X. Bai, 30th ICRC (2007), arXiv:0711.0353v1.
[9] Ambrosio et al., Phys. Rev. D 60:032001 (1999).
[10] M. Thunman, P. Gondolo, and G. Ingelman, Astropar. Phys. 5:309 (1996).
[11] R. Enberg, M. Reno and I. Sarcevic, Phys. Rev. D 78:043005 (2008).
[12] Z. Conesa del Valle, The European Physical Journal C, 49:149 (2007).
[13] S. Adler et al., Phys. Rev. Lett. 91:241803 (2003).
[14] D. MacKay, "Information Theory, Inference and Learning Algorithms," Cambridge University Press (2003).
[15] D. Heck, Forschungszentrum Karlsruhe Report FZKA 6019 (1998).
[16] P. Berghaus, T. Montaruli, and J. Ranft, Journal of Cosmology and Astropar. Phys. 6:003 (2008).
[17] P. Berghaus, these proceedings.
[18] A. Olivas, these proceedings.
Large Scale Cosmic Rays Anisotropy With IceCube
Rasha U Abbasi∗, Paolo Desiati∗ and Juan Carlos Velez∗ for the IceCube Collaboration†
∗University of Wisconsin, IceCube Neutrino Observatory, Madison, WI 53703, USA
†http://icecube.wisc.edu/
Abstract. We report on a study of the anisotropy in the arrival direction of cosmic rays with a median energy per Cosmic Ray (CR) particle of ∼14 TeV using data from the IceCube detector. IceCube is a neutrino observatory at the geographic South Pole; when completed it will comprise 80 strings plus 6 additional strings for the low energy array Deep Core. The strings are deployed in the deep ice between 1,450 and 2,450 meters depth, each string containing 60 optical sensors. The data used in this analysis were collected from April 2007 to March 2008 with 22 deployed strings and contain ∼4.3 billion downward going muon events. A two-dimensional skymap is presented with evidence of a 0.06% large scale anisotropy. The energy dependence of this anisotropy at median energies per CR particle of 12 TeV and 126 TeV is also presented in this work. This anisotropy could arise from a number of possible effects; it could further enhance the understanding of the structure of the galactic magnetic field and possible cosmic ray sources.
Keywords: IceCube, Cosmic rays, Anisotropy.
I. INTRODUCTION
The intensity of Galactic Cosmic Rays (GCRs) has been observed to show a sidereal anisotropic variation on the order of 10⁻⁴ at energies in the range of 1–100 TeV ([1], [2] and [3]). This anisotropy could arise from a number of different combinations of causes. One possible cause is the Compton-Getting (CG) effect, proposed in 1935 [4], which predicts that a CR anisotropy could arise from the movement of the solar system around the galactic center with a velocity of ∼220 km s⁻¹, such that an excess of CR would be present in the direction of motion of the solar system while a deficit would appear in the opposite direction. Another possible effect (proposed by Nagashima et al. [1]) causing the excess in the anisotropy, referred to as "tail-in", originates from close to the tail of the heliosphere, while the deficit in the anisotropy, referred to as "loss-cone", originates from a magnetic cone-shaped structure of the galactic field in the vicinity region.
In this paper we present results on the observation of
large scale cosmic ray anisotropy by IceCube. Previous
experiments have published a 2-dimensional skymap of
the northern hemisphere sky ([2]–[5]). This measure-
ment presents the first 2-dimensional skymap for the
southern hemisphere sky. In addition, we present the
energy dependence of this anisotropy at median energies
per CR particle of 12 TeV and 126 TeV.
The outline of the paper is as follows: the second section will describe the data used in this analysis, the analysis method and the challenges. The third section will discuss the results and the stability checks applied to the data. The fourth section will discuss the anisotropy energy dependence, and the last section is the conclusion.
II. DATA ANALYSIS
The data used in this analysis are the downward going muons collected by the IceCube neutrino observatory comprising 22 strings. The data were collected from June 2007 to March 2008. The events used in this analysis are those reconstructed by an online one-iteration likelihood (LLH) based reconstruction algorithm. The events selected online require at least ten triggered optical sensors on at least three strings. The average rate of these events is ∼240 Hz (approximately 40% of the events at triggering level). Further selection criteria are applied to the data to ensure good quality and stable runs. The final data set consists of 4.3 × 10⁹ events with a median angular resolution (angle between the reconstructed muon and the primary particle) of 3° and a median energy per CR particle of 14 TeV, as simulated according to CORSIKA [6], SIBYLL [7] and Hörandel [8].
In this analysis we are searching for a small anisotropy with high precision. The sidereal variation of the CR intensity is induced by the anisotropy in their arrival direction. However, it can also be caused by detector exposure asymmetries, non-uniform time coverage, and diurnal and seasonal variations of the atmospheric temperature. Apart from these effects, the remaining variations can only be of galactic origin.
Due to the unique location of IceCube at the South Pole, the detector observes the sky uniformly. This is not the case for the previous experiments searching for large scale cosmic ray anisotropy (e.g. [2], [3], [5]). Due to their locations they need a whole solar day to scan the entire sky. As a result they need to eliminate the diurnal and seasonal variations using various approaches. For IceCube the diurnal variation does not affect the sidereal variation because the whole sky is fully visible to the detector at any given time and because there is only one day and one night per year. In addition, although the seasonal variation is on the order of 20%, the variation is slow and does not affect the daily muon intensity significantly.
The remaining challenge for this analysis is account-
ing for the detector asymmetry, and unequal time cover-
age in the data due to the detector run selection. To illus-
trate the detector asymmetry Figure 1 shows the IceCube
22 string geographical configuration. This geographical
asymmetry results in a preferred reconstructed muon
direction since the muons would pass by more strings
in one direction in the detector compared to another.
The combination of detector event asymmetry with a
non-uniform time coverage would induce an azimuthal
asymmetry and consequently artificial anisotropy of the
arrival direction of cosmic rays. This asymmetry is
corrected for by normalizing the azimuthal distribution.
Figure 2 shows the azimuthal distribution for the
whole data set. It displays the number of events vs.
the azimuth of the arrival direction of the primary
CR particle. Note that the asymmetry in the azimuthal
distribution due to detector geometry is modeled well by
simulation.
To correct for the detector azimuthal asymmetry we
apply an azimuthal normalization. The azimuthal distri-
bution is parameterized by N, n_i, and n̄, where N is the total number of bins, n_i is the number of events per bin and n̄ is the average number of events, $\bar{n} = \frac{1}{N}\sum_{i=1}^{N} n_i$. n̄ is denoted by the horizontal red line in Figure 2. The azimuthal normalization is applied by weighting each event by n̄/n_i for that event.
In addition to the azimuthal asymmetry we also observe a zenith angle asymmetry (more events arrive from near the zenith than from near the horizon). Due to this declination dependence, the sky is divided into four declination bands such that the data are approximately equally distributed among the bands. For each band the azimuthal distribution is normalized over the whole year. The relative intensity for each bin of the 2-dimensional sky map is then calculated by dividing the number of events in that bin by the average number of events per bin in its declination band.
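A compact sketch of this map construction (Python/NumPy); the RA binning and declination band edges below are illustrative placeholders, not the values used in the analysis:

import numpy as np

def relative_intensity(ra_deg, dec_deg, weights=None, ra_bins=72,
                       dec_edges=(-90.0, -55.0, -40.0, -30.0, -25.0)):
    """2D map of (weighted) counts divided by the average count per RA bin in each declination band."""
    counts, _, _ = np.histogram2d(
        dec_deg, ra_deg,
        bins=[np.asarray(dec_edges, dtype=float), np.linspace(0.0, 360.0, ra_bins + 1)],
        weights=weights)
    band_mean = counts.mean(axis=1, keepdims=True)   # average events per RA bin in each band
    return counts / band_mean                        # relative intensity, ~1 on average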
III. RESULTS
Figure 3 shows the southern hemisphere skymap of well reconstructed downward going muons for the IceCube 22-string data set. The skymap is plotted in equatorial coordinates. The color scale represents the relative intensity of the rate in each bin per declination band, where each bin rate is calculated by dividing the number of events in that bin by the average number of events in that bin's declination band. The plot shows a large scale anisotropy in the arrival direction of cosmic rays. The amplitude and the phase of this anisotropy are determined by projecting the 2-dimensional skymap in Right Ascension (RA), as shown in Figure 4. Figure 4 shows the relative intensity vs. RA. The data are shown as points with their error bars. The fit is the second-order harmonic function Σ_{i=1}^{2} A_i cos(i (RA − φ_i)) + B, where A_i is the amplitude, φ_i is the phase and B is a constant.
Fig. 1: The IceCube detector configuration. The filled
green circles are the positions of IceCube strings and
the filled blue circles display the position of the IceTop
tanks.
Fig. 2: The azimuthal distribution for the whole data set. This plot shows the number of events vs. the azimuth of the arrival direction of the primary CR particle. The horizontal red line is the average number of events of the distribution.
The fitted amplitudes A_i, phases φ_i, and χ²/ndof for the second harmonic fit are listed in Table I. The significance of the 2-dimensional skymap is shown in Figure 5. The significance is calculated for each bin from the average number of events of that bin's declination band. Note that the significance of several bins in the excess region is greater than 4σ, and that of some bins in the deficit region is below −4σ.
TABLE I: The second harmonic fit amplitudes, phases, and χ²/ndof.

  A_1 (10⁻⁴)   φ_1             A_2 (10⁻⁴)   φ_2            χ²/ndof
  6.4 ± 0.2    66.4° ± 2.6°    2.1 ± 0.3    −65.6° ± 4°    22/19
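For illustration, a minimal sketch of fitting such a two-term harmonic function to the RA projection (Python/SciPy); variable names and starting values are illustrative assumptions, not taken from the analysis:

import numpy as np
from scipy.optimize import curve_fit

def harmonic2(ra_deg, a1, phi1, a2, phi2, b):
    """Second-order harmonic function in right ascension (RA and phases in degrees)."""
    ra = np.radians(ra_deg)
    p1, p2 = np.radians(phi1), np.radians(phi2)
    return a1 * np.cos(1 * (ra - p1)) + a2 * np.cos(2 * (ra - p2)) + b

# rel_int, rel_err: relative intensity and its uncertainty at RA bin centers ra_c
# popt, pcov = curve_fit(harmonic2, ra_c, rel_int, sigma=rel_err,
#                        p0=[6e-4, 60.0, 2e-4, -60.0, 1.0], absolute_sigma=True)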
Fig. 3: The IceCube skymap in equatorial coordinates
(Declination (Dec) vs. Right Ascension (RA)). The color
scale is the relative intensity.
Fig. 4: The 1-dimensional projection of the IceCube 2-
dimensional skymap. The line is the second harmonic
function fit.
Fig. 5: The IceCube significance skymap in equatorial
coordinates (Dec vs. RA). The color scale is the signif-
icance.
To check the stability of the measured large scale anisotropy we performed a number of checks on the data set. One check was applied by dividing the data into two sets, where one set contains sub-runs with an even index number and the other set contains sub-runs with an odd index number (a sub-run on average contains events collected over ∼20 minutes). Another check was applied by dividing the data into two sets, each containing half of the sub-runs selected at random. The results of both tests were consistent.
In addition, stability checks were applied to test for daily and seasonal variation effects. To test for a daily variation effect, the data were divided into two sets: the first set contains sub-runs with rates greater than the mean rate of that sub-run's day, and the second set contains sub-runs with rates less than the mean rate of that sub-run's day. Furthermore, to test for a seasonal variation effect, the data set was divided into two sets: the first set holds the winter months' sub-runs (June-Oct.) and the second set holds the summer months' sub-runs (Nov.-March). For both tests we see no significant change in the anisotropy between the two data sets.
IV. ENERGY DEPENDENCE
In order to better understand the possible nature of the anisotropy we have searched for energy dependent effects in our data. To determine the energy dependence of the signal we divided the data into two energy bins. To accomplish this, and to ensure an approximately constant energy distribution across the sky, both the number of sensors triggered by the event and the zenith angle of the event are used to select the energy bands. The first energy bin contains 3.8 × 10⁹ events with a median energy per CR particle of 12.6 TeV and 90% of the events between 2 and 158 TeV. The second energy bin contains 9.6 × 10⁸ events with a median energy per CR particle of 126 TeV and 90% of the events between 10 TeV and 1 PeV. Each 2-dimensional skymap is projected onto a 1-dimensional variation in RA. For comparison with previous experiments, the 1-dimensional RA distribution is fitted with a first harmonic function. The first harmonic amplitude and phase for the first energy band are A_1 = (7.3 ± 0.3) × 10⁻⁴ and φ_1 = 63.4° ± 2.6°, while the amplitude and phase for the second energy band are A_1 = (2.9 ± 0.6) × 10⁻⁴ and φ_1 = 93.2° ± 12°. Figure 6 shows the amplitude from this analysis (filled circles) in comparison to previous experiments (empty squares). Note that the amplitude in this analysis decreases with energy over the range 10-100 TeV.
Fig. 6: The filled circle markers are the result of this analysis and the empty square markers are the results from previous experiments ([3], [9], [10], [11], [12], [13], [14], [15], [16], [17], [18], [19], [20], [21], [22], [23], [24], [25]).
V. CONCLUSION
In this analysis we present the first 2-dimensional skymap of the southern hemisphere based on 4.3 billion cosmic rays with a median angular resolution of 3° and a median energy per CR particle of 14 TeV, as observed by IceCube. A large cosmic ray anisotropy is observed, with an amplitude of A_1 = (6.4 ± 0.2) × 10⁻⁴ and a phase of φ_1 = 66.4° ± 2.6° for the leading term of the second-order harmonic fit. The significance of some bins in the excess and deficit regions was found to exceed |4σ|. This anisotropy is an extension of the large scale anisotropy previously measured in the northern hemisphere and reported by multiple experiments ([2]-[5]).
In addition, we report on the energy dependence of the anisotropy, giving the amplitude of the first harmonic fit for two energy bands. For the first energy band, with a median energy per CR particle of 12.6 TeV, the amplitude is found to be A_1 = (7.3 ± 0.3) × 10⁻⁴. For the second energy band, with a median energy per CR particle of 126 TeV, the amplitude is found to be A_1 = (2.9 ± 0.6) × 10⁻⁴. The amplitude is thus found to follow a decreasing trend with energy.
VI. ACKNOWLEDGMENTS
We acknowledge the support from the following agencies: U.S. National Science Foundation-Office of Polar Programs, U.S. National Science Foundation-Physics Division, University of Wisconsin Alumni Research Foundation, U.S. Department of Energy, and National Energy Research Scientific Computing Center, the Louisiana Optical Network Initiative (LONI) grid computing resources; Swedish Research Council, Swedish Polar Research Secretariat, and Knut and Alice Wallenberg Foundation, Sweden; German Ministry for Education and Research (BMBF), Deutsche Forschungsgemeinschaft (DFG), Germany; Fund for Scientific Research (FNRS-FWO), Flanders Institute to encourage scientific and technological research in industry (IWT), Belgian Federal Science Policy Office (Belspo); the Netherlands Organisation for Scientific Research (NWO); M. Ribordy acknowledges the support of the SNF (Switzerland); A. Kappes and A. Groß acknowledge support by the EU Marie Curie OIF Program.
REFERENCES
[1] K. Nagashima et al., Journal of Geophysical Research, vol. 103, pp. 17429-17440, Aug. 1998.
[2] M. Amenomori et al., Science, vol. 314, pp. 439-443, Oct. 2006.
[3] G. Guillian et al. (Super-Kamiokande Collaboration), Physical Review D, vol. 75, p. 062003, 2007.
[4] A. H. Compton and I. A. Getting, Physical Review, vol. 47, pp. 817-821, June 1935.
[5] A. Abdo et al., arXiv:astro-ph/0806.2293, 2008.
[6] CORSIKA, http://www-ik.fzk.de/corsika/.
[7] R. Engel, International Cosmic Ray Conference, vol. 1, p. 415, 1999.
[8] J. R. Hörandel, Astroparticle Physics, vol. 19, pp. 193-220, May 2003.
[9] K. Munakata et al., Physical Review D, vol. 56, p. 23, 1997.
[10] M. Ambrosio et al., Physical Review D, vol. 67, p. 042002, 2003.
[11] D. Swinson et al., Planet. Space Sci., vol. 33, p. 1069, 1985.
[12] K. Nagashima et al., Planet. Space Sci., vol. 33, p. 395, 1985.
[13] H. Ueno et al., International Cosmic Ray Conference, vol. 6, p. 361, 1990.
[14] T. Thambyahpillai, International Cosmic Ray Conference, vol. 3, p. 383, Aug. 1983.
[15] K. Munakata et al., International Cosmic Ray Conference, vol. 4, p. 639, 1995.
[16] S. Mori et al., International Cosmic Ray Conference, vol. 4, p. 648, 1995.
[17] M. Bercovitch et al., International Cosmic Ray Conference, vol. 10, p. 246, 1981.
[18] B. K. Fenton et al., International Cosmic Ray Conference, vol. 4, p. 635, 1995.
[19] Y. W. Lee et al., International Cosmic Ray Conference, vol. 2, p. 18, 1987.
[20] D. J. Cutler et al., Astrophys. J., vol. 376, pp. 322-334, July 1991.
[21] Y. M. Andreyev et al., International Cosmic Ray Conference, vol. 2, p. 22, 1987.
[22] K. Munakata, International Cosmic Ray Conference, vol. 7, p. 293, 1999.
[23] V. V. Alexeyenko et al., International Cosmic Ray Conference, vol. 2, p. 146, 1981.
[24] M. Aglietta et al., International Cosmic Ray Conference, vol. 2, p. 800, 1995.
[25] T. Gombosi et al., International Cosmic Ray Conference, vol. 2, pp. 586-591, 1975.
PROCEEDINGS OF THE 31st ICRC, ŁÓDŹ 2009
Atmospheric Variations as observed by IceCube
Serap Tilav∗, Paolo Desiati†, Takao Kuwabara∗, Dominick Rocco†, Florian Rothmaier‡, Matt Simmons∗, Henrike Wissing§¶, for the IceCube Collaboration (see the special section of these proceedings)
∗ Bartol Research Institute and Dept. of Physics and Astronomy, University of Delaware, Newark, DE 19716, USA.
† Dept. of Physics, University of Wisconsin, Madison, WI 53706, USA.
‡ Institute of Physics, University of Mainz, Staudinger Weg 7, D-55099 Mainz, Germany.
§ III Physikalisches Institut, RWTH Aachen University, D-52056 Aachen, Germany.
¶ Dept. of Physics, University of Maryland, College Park, MD 20742, USA.
Abstract. We have measured the correlation of rates in IceCube with long and short term variations in the South Pole atmosphere. The yearly temperature variation in the middle stratosphere (30-60 hPa) is highly correlated with the high energy muon rate observed deep in the ice, and causes a ±10% seasonal modulation in the event rate. The counting rates of the surface detectors, which are due to secondary particles of relatively low energy (muons, electrons and photons), have a negative correlation with temperatures in the lower layers of the stratosphere (40-80 hPa), and are modulated at a level of ±5%. The region of the atmosphere between pressure levels 20-120 hPa, where the first cosmic ray interactions occur and the produced pions/kaons interact or decay to muons, is the Antarctic ozone layer. The anti-correlation between surface and deep ice trigger rates reflects the properties of pion/kaon decay and interaction as the density of the stratospheric ozone layer changes. Therefore, IceCube closely probes the ozone hole dynamics and the temporal behavior of the stratospheric temperatures.
Keywords: IceCube, IceTop, South Pole
I. INTRODUCTION
The IceCube Neutrino Observatory, located at the
geographical South Pole (altitude 2835m), has been
growing incrementally in size since 2005, surrounding
its predecessor AMANDA[1]. As of March 2009, Ice-
Cube consists of 59 strings in the Antarctic ice, and
59 stations of the IceTop cosmic ray air shower array
on the surface. Each IceCube string consists of 60
Digital Optical Modules (DOMs) deployed at depths of
1450-2450m, and each IceTop station comprises 2 ice
Cherenkov tanks with 2 DOMs in each tank.
IceCube records the rate of atmospheric muons with E_µ ≥ 400 GeV. Muon events pass the IceCube Simple Majority Trigger (InIce SMT) if 8 or more DOMs are triggered within 5 µs. The IceTop array records air showers with primary cosmic ray energies above 300 TeV. Air showers pass the IceTop Simple Majority Trigger (IceTop SMT) if 6 or more surface DOMs trigger within 5 µs.
As the Antarctic atmosphere goes through seasonal
changes, the characteristics of the cosmic ray interac-
tions in the atmosphere follow these variations. When
the primary cosmic ray interacts with atmospheric nu-
clei, the pions and kaons produced at the early interac-
tions mainly determine the nature of the air shower. For
the cosmic ray energies discussed here, these interactions
happen in the ozone layer (20-120 hPa pressure layer
at 26 down to 14 km) of the South Pole stratosphere.
During the austral winter, when the stratosphere is cold
and dense, the charged mesons are more likely to interact
and produce secondary low energy particles. During the
austral summer when the warm atmosphere expands and
becomes less dense, the mesons more often decay rather
than interact.
Figure 1 demonstrates the modulation of the rates in relation to the temporal changes of the South Pole stratosphere. The scaler rate of a single IceTop DOM on the surface is mostly due to low energy secondary particles (MeV electrons and gammas, ∼1 GeV muons) [2] produced throughout the atmosphere, and is therefore highly modulated by the atmospheric pressure. However, after correcting for the barometric effect, the IceTop DOM counting rate also reflects the initial pion interactions in the middle stratosphere. The high energy muon rate in the deep ice, on the other hand, directly traces the decay characteristics of high energy pions in higher layers of the stratosphere. The IceCube muon rate reaches its maximum at the end of January, when the stratosphere is warmest and most tenuous. Around the same time IceTop DOMs measure the lowest rates on the surface, as the pion interaction probability reaches its minimum. The high energy muon rate starts to decline as the stratosphere gets colder and denser. The pions then interact more often than they decay, yielding the maximum rate in IceTop and the minimum muon rate in the deep ice at the end of July.
II. ATMOSPHERIC EFFECTS ON THE ICECUBE RATES
The Antarctic atmosphere is closely monitored by the
NOAA Polar Orbiting Environmental Satellites (POES)
and by the radiosonde balloon launches of the South
Pole Meteorology Office. However, stratospheric data
2
ATMOSPHERICVARIATION
Fig. 1. The temporal behavior of the South Pole stratosphere from May 2007 to April 2009 is compared to the IceTop DOM counting rate and the high energy muon rate in the deep ice. (a) The temperature profiles of the stratosphere at pressure layers from 20 hPa to 100 hPa, where the first cosmic ray interactions happen. (b) The IceTop DOM counting rate (black: observed, blue: after barometric correction) and the surface pressure (orange). (c) The IceCube muon trigger rate and the calculated effective temperature (red).
is sparse during the winter when the balloons do not
reach high altitudes, and satellite based soundings fail
to return reliable data. For such periods NOAA derives
temperatures from their models. We utilize both the
ground-based data and satellite measurements/models
for our analysis.
A. Barometric effect
In a first order approximation, the correlation between the change of the logarithm of the rate, ∆(ln R), and the surface pressure change, ∆P, is

  ∆(ln R) = β · ∆P,   (1)

where β is the barometric coefficient.
As shown by the black line in Figure 1b, the observed IceTop DOM counting rate varies by ±10% in anti-correlation with the surface pressure, and the barometric coefficient is determined to be β = −0.42 %/hPa. Using this value, the pressure-corrected scaler rate is plotted as the smoother (blue) line in Figure 1b. The cosmic ray shower rate detected by the IceTop array also varies, by ±17%, in anti-correlation with the surface pressure, and can be corrected with a β value of −0.77 %/hPa. As expected [3], the IceCube muon rate shown in Figure 1c is not correlated with the surface pressure. However, during exceptional stratospheric temperature changes, the second order temperature effect on pressure becomes large enough to cause an anti-correlation of the high energy muon rate with the barometric pressure. During such events the effect directly reflects sudden stratospheric density changes, specifically in the ozone layer.
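A minimal sketch of this barometric correction, assuming equation (1) with a constant coefficient (Python); the function name and the use of the mean pressure as reference are illustrative choices:

import numpy as np

def pressure_correct(rate_hz, pressure_hpa, beta_percent_per_hpa=-0.42):
    """Remove the barometric modulation: ln R_corr = ln R_obs - beta * (P - <P>)."""
    beta = beta_percent_per_hpa / 100.0          # convert %/hPa to 1/hPa
    dp = pressure_hpa - np.mean(pressure_hpa)    # pressure deviation from its mean
    return rate_hz * np.exp(-beta * dp)          # corrected rate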
B. Seasonal Temperature Modulation
Figure 1 clearly demonstrates the seasonal temperature effect on the rates. The IceTop DOM counting rate, after barometric correction, shows a ±5% negative temperature correlation. On the other hand, the IceCube muon rate is positively correlated with temperature, with a ±10% seasonal variation.
From phenomenological studies [4][5] it is known that the correlation between temperature and muon intensity can be described by the effective temperature T_eff, defined as a weighted average of the temperatures from the surface to the top of the atmosphere. T_eff approximates the atmosphere as an isothermal body, weighting each pressure layer according to its relevance to muon production in the atmosphere [5][6].
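For illustration only, a sketch of such a weighted average (Python); the weights used in the analysis follow [5][6] and are not reproduced here, so the weight function below is a placeholder assumption:

import numpy as np

def effective_temperature(temps_k, pressures_hpa, weight_fn):
    """Weighted average of layer temperatures, T_eff = sum(W_p * T_p) / sum(W_p)."""
    w = np.array([weight_fn(p) for p in pressures_hpa])   # relevance of each layer (placeholder)
    return np.sum(w * np.asarray(temps_k)) / np.sum(w)

# Example with a crude placeholder weight emphasizing the upper layers:
# t_eff = effective_temperature(temps_k, pressures_hpa, weight_fn=lambda p: 1.0 / p)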
The variation of the muon rate, ∆R_µ/⟨R_µ⟩, is related to the effective temperature as

  ∆R_µ/⟨R_µ⟩ = α_T · ∆T_eff/⟨T_eff⟩,   (2)

where α_T is the atmospheric temperature coefficient. Using balloon and satellite data for the South Pole atmosphere, we calculated the effective temperature, shown as the red line in Figure 1c. We see that it traces the IceCube muon rate remarkably well.
TABLE I: Temperature coefficients and correlation coefficients of the rates for the stratospheric layers between 20 and 100 hPa, and for T_eff.

                            IceCube Muon         IceTop Count.
  P (hPa)     <T_p> (K)     α_T^p     γ          α_T^p     γ
  20-30       214.0         0.512     0.953      -0.194    -0.834
  30-40       208.7         0.550     0.986      -0.216    -0.906
  40-50       207.3         0.591     0.993      -0.240    -0.946
  50-60       206.6         0.627     0.985      -0.261    -0.968
  60-70       206.3         0.656     0.971      -0.278    -0.975
  70-80       206.3         0.679     0.954      -0.292    -0.975
  80-100      206.5         0.708     0.927      -0.310    -0.971

              <T_eff> (K)   α_T       γ          α_T       γ
              211.3         0.901     0.990      -0.360    -0.969
The calculated temperature coefficient α_T = 0.9 for the IceCube muon rate agrees well with model expectations as well as with other experimental measurements [7][8].
In this paper we also study in detail the relation between the rates and the stratospheric temperatures for the individual pressure layers from 20 hPa to 100 hPa:

  ∆R/⟨R⟩ = α_T^p · ∆T_p/⟨T_p⟩.   (3)

The temperature coefficient for each pressure layer, α_T^p, and the correlation coefficient γ are determined from a regression analysis. The pressure-corrected IceTop DOM counting rate and the IceCube muon rate are sorted into bins of ∼10 days, and the deviations ∆R/⟨R⟩ from the average values are compared with the deviations of the temperatures at different depths, ∆T_p/⟨T_p⟩.
We list the values of α_T^p and γ for the IceCube muon rate and the IceTop DOM counting rate in Table I. We find that the IceCube muon rate correlates best with the temperatures of the 30-60 hPa pressure layers, while the IceTop DOM counting rate shows the best correlation with the 60-80 hPa layers. In Figure 2 we plot the rate-temperature correlations for the layers which yield the best correlation.
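A minimal sketch of such a regression for one pressure layer (Python/NumPy); the array names are illustrative and the sorting into ∼10-day bins is assumed to have been done beforehand:

import numpy as np

def temperature_coefficient(rates, temps):
    """Fit dR/<R> = alpha * dT/<T> and return (alpha, correlation coefficient gamma)."""
    dr = np.asarray(rates) / np.mean(rates) - 1.0   # fractional rate deviation per time bin
    dt = np.asarray(temps) / np.mean(temps) - 1.0   # fractional temperature deviation
    alpha = np.sum(dr * dt) / np.sum(dt * dt)       # least-squares slope through the origin
    gamma = np.corrcoef(dr, dt)[0, 1]               # Pearson correlation coefficient
    return alpha, gamma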
III. EXCEPTIONAL STRATOSPHERIC EVENTS AND
THE MUON RATES
The South Pole atmosphere is unique because of
the polar vortex. In winter a large-scale counter clock-
wise flowing cyclone forms over the entire continent
of Antarctica, isolating the Antarctic atmosphere from
higher latitudes. Stable heat loss due to radiative cooling
continues until August without much disruption, and the
powerful Antarctic vortex persists until the sunrise in
September. As warm air rushes in, the vortex loses its
strength, shrinks in size, and sometimes completely dis-
appears in austral summer. The density profile inside the
vortex changes abruptly during the sudden stratospheric
warming events, which eventually may cause the vortex
collapse. The ozone-depleted layer at 14-21 km altitude (the ozone hole), observed in the September/October period, is usually replaced by an ozone-rich layer at 18-30 km soon after the vortex breaks up.
Fig. 2. Correlation of the IceCube muon and IceTop DOM counting rates with stratospheric temperatures and T_eff. (a) IceCube muon rate vs. temperature at the 40-50 hPa pressure layer. (b) IceCube muon rate vs. effective temperature. (c) IceTop DOM counting rate vs. temperature at the 60-70 hPa pressure layer. (d) IceTop DOM counting rate vs. effective temperature.
Apart from the slow seasonal temperature variations,
IceCube also probes the atmospheric density changes
due to the polar vortex dynamics and vigorous strato-
spheric temperature changes on time scales as short as
days or even hours, which are of great meteorological
interest.
An exceptional and so far unique stratospheric event
has already been observed in muon data taken with
IceCube’s predecessor AMANDA-II.
A. 2002 Antarctic ozone hole split detected by AMANDA
In late September 2002 the Antarctic stratosphere
underwent its first recorded major Sudden Stratospheric
Warming (SSW), during which the atmospheric temper-
atures increased by 40 to 60 K in less than a week.
This unprecedented event caused the polar vortex and
the ozone hole, normally centered above the South Pole,
to split into two smaller, separate off-center parts (Figure
3) [9].
Figure 4 shows the stratospheric temperatures between
September and October 2002 along with the AMANDA-
II muon rate. The muon rate traces temperature varia-
tions in the atmosphere in great detail, with the strongest
correlation observed for the 40-50 hPa layer.
B. South Pole atmosphere 2007-2008
Unlike in 2002, the stratospheric conditions over
Antarctica were closer to average in 2007 and 2008.
In 2007 the polar vortex was off-center from the South
Pole during most of September and October, resulting in
greater heat flux into the vortex, which decreased rapidly
in size. When it moved back over the colder Pole region
in early November it gained strength and persisted until
the beginning of December. The 2008 polar vortex was
Fig. 3.
Ozone concentration over the southern hemisphere on
September 20th 2002 (left) and September 25th 2002 (right) [10].
Fig. 4. Average temperatures in various atmospheric layers over the
South Pole (top) and deep ice muon rate recorded with the AMANDA-
II detector (bottom) during the Antarctic ozone hole split of September-
October 2002.
one of the largest and strongest observed in the last 10
years over the South Pole. Because of this, the heat flux
entering the area was delayed by 20 days. The 2008
vortex broke the record in longevity by persisting well
into mid-December.
In Figure 5 we overlay the IceCube muon rate over
the temperature profiles of the Antarctic atmosphere pro-
duced by the NOAA Stratospheric analysis team[11]. We
note that the anomalous muon rates (see, for example,
the sudden increase by 3% on 6 August 2008) observed
by IceCube are in striking correlation with the middle
and lower stratospheric temperature anomalies.
We are developing automated methods to detect anomalous events in the South Pole atmosphere, as well as working toward a detailed understanding and better modeling of cosmic ray interactions during such stratospheric events.
Fig. 5. The temperature time series of the Antarctic atmosphere produced by NOAA [11] are shown for 2007 and 2008. The pattern observed in the deep ice muon rate (black line) is superposed onto the plot to display the striking correlation with the stratospheric temperature anomalies.

IV. ACKNOWLEDGEMENTS
We are grateful to the South Pole Meteorology Office and the Antarctic Meteorological Research Center of the University of Wisconsin-Madison for providing the meteorological data. This work is supported by the National Science Foundation.
REFERENCES
[1] Karle, A. for the IceCube Collaboration (2008), IceCube: Construction Status and First Results, arXiv:0812.3981v1.
[2] Clem, J. et al. (2008), Response of IceTop tanks to low-energy particles, Proceedings of the 30th ICRC, Vol. 1 (SH), p. 237-240.
[3] Dorman, L. (2004), Cosmic Rays in the Earth's Atmosphere and Underground, Springer Verlag.
[4] Barrett, P. et al. (1952), Interpretation of cosmic-ray measurements far underground, Reviews of Modern Physics, Vol. 24, Issue 3, p. 133-178.
[5] Ambrosio, M. et al. (1997), Seasonal variations in the underground muon intensity as seen by MACRO, Astroparticle Physics, Vol. 7, Issue 1-2, p. 109-124.
[6] Gaisser, T. (1990), Cosmic rays and particle physics. Cambridge, UK: Univ. Pr.
[7] Grashorn, E. et al., Observation of the seasonal variation in underground muon intensity, Proceedings of the 30th ICRC, Vol. 5 (HE part 2), p. 1233-1236.
[8] Osprey, S., et al. (2009), Sudden stratospheric warmings seen in MINOS deep underground muon data, Geophys. Res. Lett., 36, L05809, doi:10.1029/2008GL036359.
[9] Varotsos, C., et al. (2002), Environ. Sci. & Pollut. Res., p. 375.
[10] http://ozonewatch.gsfc.nasa.gov
[11] http://www.cpc.noaa.gov/products/stratosphere/strat-trop/
PROCEEDINGS OF THE 31st ICRC, ŁÓDŹ 2009
Supernova Search with the AMANDA / IceCube Detectors
Thomas Kowarik∗, Timo Griesel∗, Alexander Piegsa∗ for the IceCube Collaboration†
∗ Institute of Physics, University of Mainz, Staudinger Weg 7, D-55099 Mainz, Germany
† See the special section of these proceedings
Abstract. Since 1997 the neutrino telescope AMANDA at the geographic South Pole has been monitoring our Galaxy for neutrino bursts from supernovae. Triggers were introduced in 2004 to submit burst candidates to the Supernova Early Warning System SNEWS. From 2007 the burst search was extended to the much larger IceCube telescope, which now supersedes AMANDA. By exploiting the low photomultiplier noise in the antarctic ice (on average 280 Hz for IceCube), neutrino bursts from nearby supernovae can be identified by the induced collective rise in the pulse rates. Although only a counting experiment, IceCube will provide the world's most precise measurement of the time profile of a neutrino burst near the galactic center. The sensitivity to neutrino properties such as the θ_13 mixing angle and the neutrino hierarchy is discussed, as well as the possibility to detect the deleptonization burst.
Keywords: supernova, neutrino, IceCube
I. INTRODUCTION
Up to now, the only detected extra-terrestrial sources of neutrinos are the Sun and supernova SN1987A. To extend the search to TeV energies and above, neutrino telescopes such as AMANDA and IceCube [1] have been built. It turns out that the noise rates of the light sensors (optical modules, OMs) in the antarctic ice are very low (∼700 Hz for AMANDA, ∼280 Hz for IceCube), opening up the possibility to detect MeV electron anti-neutrinos from close supernovae via an increase in the collective rate of all light sensors. The possibility to monitor the Galaxy for supernovae with neutrino telescopes such as AMANDA was first proposed in [2], and a first search was performed using data from the years 1997 and 1998 [3].
The version of the supernova data acquisition (SNDAq) covered in this paper was introduced for AMANDA at the beginning of the year 2000 and was extended to IceCube in 2007. The AMANDA SNDAq was switched off in February 2009. We investigate data recorded by both telescopes, concentrating on the 9 years of AMANDA measurements, and make predictions for the expected sensitivity of IceCube.
II. DETECTORS AND DATA ACQUISITION
In AMANDA, the pulses of the 677 OMs are collected in a VME/Linux based data acquisition system which operates independently of the main data acquisition aimed at high energy neutrinos.
Fig. 1. Rate distribution of a typical IceCube module. The recorded rate distribution (filled area) encompasses about 44 days and has been fitted with a Gaussian (solid line, χ²/N_dof = 56.5) defined by µ = 271 Hz and σ = 21 Hz. However, a lognormal function (dotted line, χ²/N_dof = 0.66) with geometric mean µ_geo = 6.4 ln(Hz), geometric standard deviation σ_geo = 0.03 ln(Hz) and a shift of x_0 = 377 Hz describes the data much better.
It counts the pulses from every connected optical module in a 20-bit counter in fixed 10 ms time intervals that are synchronized by a GPS clock.
In IceCube, PMT rates are recorded with a 1.6384 ms binning by scalers on each optical module. The information is locally buffered and read out by the IceCube data acquisition system, which then transfers this data to the SNDAq; there it is synchronized and regrouped into 2 ms bins.
The software used for data acquisition and analysis is essentially the same for AMANDA and IceCube. The data is rebinned in 500 ms intervals and subjected to an online analysis described later. In case of a significant rate increase (a "supernova trigger"), an alarm is sent to the Supernova Early Warning System (SNEWS, [4]) via the Iridium satellite network and the data is saved in a fine time binning (10 ms for AMANDA and 2 ms for IceCube).
III. SENSOR RATES
In the 500 ms time binning, the pulse distribution of an average AMANDA or IceCube OM conforms only approximately to a Gaussian. It can more accurately be described by a lognormal distribution (see figure 1).
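A small sketch of such a comparison, a Gaussian versus a shifted (three-parameter) lognormal fit to a binned rate distribution (Python/SciPy); the parametrization of the lognormal below follows SciPy conventions and differs from the one quoted in the figure caption, and all names and starting values are illustrative:

import numpy as np
from scipy import stats
from scipy.optimize import curve_fit

def gauss(r, a, mu, sigma):
    return a * np.exp(-0.5 * ((r - mu) / sigma) ** 2)

def lognorm_pdf(r, a, s, loc, scale):
    # Shifted lognormal, amplitude times the SciPy three-parameter lognormal density.
    return a * stats.lognorm.pdf(r, s, loc=loc, scale=scale)

# counts, centers: histogrammed per-OM rates in 500 ms bins (illustrative names)
# pg, _ = curve_fit(gauss, centers, counts, p0=[counts.max(), 280.0, 25.0])
# pl, _ = curve_fit(lognorm_pdf, centers, counts, p0=[counts.sum(), 0.1, 0.0, 280.0])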
The pulse distributions exhibit Poissonian and correlated afterpulse components contributing with similar strengths. The correlated component is anticorrelated with temperature and arises from Cherenkov light caused by ⁴⁰K decays and from glass luminescence due to radioactive decay chains. A cut of 250 µs on the time difference between consecutive pulses effectively suppresses afterpulse trains, improves the significance of a simulated supernova at 7.5 kpc by approximately 20%, and makes the pulse distribution more Poissonian in nature.
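A minimal sketch of one way to realize such a cut, as an artificial non-paralyzable deadtime on sorted pulse times (Python); this is an illustrative interpretation, not the SNDAq implementation:

import numpy as np

def apply_deadtime(pulse_times_s, deadtime_s=250e-6):
    """Keep a pulse only if it arrives at least `deadtime_s` after the last accepted pulse."""
    kept = []
    last = -np.inf
    for t in np.sort(np.asarray(pulse_times_s, dtype=float)):
        if t - last >= deadtime_s:
            kept.append(t)
            last = t
    return np.array(kept)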
IV. EFFECTIVE VOLUMES
Supernovae radiate all neutrino flavors, but due to the relatively large inverse beta decay cross section, the main signal in IceCube is induced by electron anti-neutrinos (see [5]). The rate R per OM can be approximated by weighting the energy dependent electron anti-neutrino flux at the Earth, Φ(E_ν̄e) (derived from the neutrino luminosity and spectra found in [6]), with the effective area for electron anti-neutrino detection, A_eff(E_ν̄e), and integrating over the whole energy range:

  R = ∫₀^∞ dE_ν̄e Φ(E_ν̄e) A_eff(E_ν̄e),   with
  A_eff(E_ν̄e) = n ∫₀^∞ dE_e+ (dσ/dE_e+)(E_ν̄e, E_e+) V_eff,e+(E_e+).

Here dσ/dE_e+(E_ν̄e, E_e+) is the inverse beta decay cross section, n is the density of protons in the ice, and V_eff,e+(E_e+) is the effective volume for positron detection of a single OM.
V_eff,e+(E_e+) can be calculated by multiplying the number of Cherenkov photons produced by the positron with the effective volume for the detection of a single Cherenkov photon, V_eff,γ.
By tracking Cherenkov photons in the antarctic ice around the IceCube light sensors [7] and simulating the module response, one obtains V_eff,γ = 0.104 m³ for the most common AMANDA sensors and V_eff,γ = 0.182 m³ for the IceCube sensors. With a GEANT-4 simulation, the number of photons produced by a positron of energy E_e+ is estimated to be N_γ = 270 · E_e+/MeV. Consequently, the effective volumes for positrons as a function of their energy are V_eff,e+(AMANDA) = 19.5 · E_e+ m³/MeV and V_eff,e+(IceCube) = 34.2 · E_e+ m³/MeV. Uncertainties in the effective volumes derive directly from the uncertainties of the ice models (∼5%) and of the OM sensitivities (∼10%).
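A numerical sketch of the rate integral above using the quoted linear effective volumes (Python/NumPy); the flux and differential cross section are user-supplied placeholder functions, and the caller must choose consistent units:

import numpy as np

# Effective volumes quoted above, in m^3 per MeV of positron energy.
V_EFF_PER_MEV = {"AMANDA": 19.5, "IceCube": 34.2}

def effective_volume_m3(e_positron_mev, detector="IceCube"):
    """Linear scaling of the single-OM effective volume with positron energy."""
    return V_EFF_PER_MEV[detector] * e_positron_mev

def om_rate(flux, dsigma_dE, n_protons_per_m3, e_nu_mev, e_pos_mev, detector="IceCube"):
    """Evaluate R = integral dE_nu Phi(E_nu) A_eff(E_nu) on discrete grids (trapezoidal rule)."""
    enu = np.asarray(e_nu_mev, dtype=float)
    epos = np.asarray(e_pos_mev, dtype=float)
    # A_eff(E_nu) = n * integral dE_pos (dsigma/dE_pos) * V_eff(E_pos)
    a_eff = n_protons_per_m3 * np.array(
        [np.trapz(dsigma_dE(E, epos) * effective_volume_m3(epos, detector), epos) for E in enu])
    return np.trapz(flux(enu) * a_eff, enu)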
In this paper we assume a supernova neutrino production according to the Lawrence-Livermore model [8], as it is the only one that provides spectra for up to 15 s. It gives a mean electron anti-neutrino energy of 15 MeV, corresponding to an average positron energy of (13.4 ± 0.5) MeV.
V. ANALYSIS PROCEDURE
A simple investigation of the rate sums would be very susceptible to fluctuations due to variations in the detector response or to external influences such as the seasonal variation of the muon rates. Medium and long term fluctuations are tracked by estimating the average count rate in a sliding time window. The rate deviation for a collective, homogeneous, neutrino-induced ice illumination is calculated with a likelihood technique. In time bins of 0.5 s and longer, the pulse distributions can be approximated by Gaussian distributions. For a collective rate increase ∆µ, the expectation value of the average mean rate µ_i of a light sensor i with relative sensitivity ε_i increases to µ_i + ε_i·∆µ. The mean value µ_i and its standard deviation σ_i are averaged over a sliding window of 10 min, excluding 15 s before and after the 0.5 s time frame r_i studied. By taking the product of the corresponding Gaussian distributions, the following likelihood for a rate deviation ∆µ is obtained:

  L = ∏_{i=1}^{N_OM} 1/(√(2π) σ_i) · exp( −(r_i − (µ_i + ε_i ∆µ))² / (2σ_i²) ).
Minimization of −ln L leads to

  ∆µ = σ²_∆µ · Σ_{i=1}^{N_OM} ε_i (r_i − µ_i)/σ_i²,   with   σ²_∆µ = ( Σ_{i=1}^{N_OM} ε_i²/σ_i² )⁻¹.
The data is analyzed in three time binnings, 0.5 s, 4 s and 10 s, for the following reasons: First, the finest time binning accessible to the online analysis is 0.5 s. Second, as argued in [9], the neutrino measurements of SN1987A are roughly compatible with an exponential decay with τ = 3 s; the optimal time frame for the detection of a signal with such a signature is ≈ 3.8 s. Last, 10 s is the approximate time frame in which most of the neutrinos from SN1987A arrived.
To ensure data quality, the optical modules are subjected to careful quality checks and cleaning. Modules with rates outside of a predefined range, a high dispersion with respect to the Poissonian expectation, or a large skewness are disqualified in real time. Since SNEWS requests one alarm per 10 days, the supernova trigger threshold is set to 6.3 σ.
To ensure that the observed rate deviation is homogeneous and isotropic, the following χ² discriminant is examined:

  χ²(∆µ) = Σ_{i=1}^{N_OM} [ (r_i − (µ_i + ε_i ∆µ)) / σ_i ]².

We demand the data to conform to a χ² confidence level of 99.9%. However, it was found that the χ² cannot clearly distinguish between isotropic rate changes and fluctuations of a significant number of OMs: if e.g. 20% of the OMs record rate increases of about 1 σ (≈ 20 pulses/0.5 s), the significance for isotropic illumination can rise above 6 σ without being rejected by the χ² condition. Still, no method was found that performed better than the χ².
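A compact numerical sketch of the estimator, its uncertainty, the significance ξ, and the χ² discriminant defined above (Python/NumPy); the array names are illustrative:

import numpy as np

def collective_rate_increase(r, mu, sigma, eps):
    """Return (dmu, sigma_dmu, xi, chi2) for one 0.5 s time frame.

    r     -- measured rates per OM in this frame
    mu    -- sliding-window mean rate per OM
    sigma -- sliding-window standard deviation per OM
    eps   -- relative sensitivity per OM
    """
    var_dmu = 1.0 / np.sum(eps**2 / sigma**2)                  # sigma^2_dmu
    dmu = var_dmu * np.sum(eps * (r - mu) / sigma**2)          # best-fit collective increase
    xi = dmu / np.sqrt(var_dmu)                                # significance
    chi2 = np.sum(((r - (mu + eps * dmu)) / sigma) ** 2)       # homogeneity discriminant
    return dmu, np.sqrt(var_dmu), xi, chi2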
VI. EXTERNAL PERTURBATIONS
Figure 2 shows the distribution of the significance ξ = ∆µ/σ_∆µ for AMANDA. From the central limit theorem, one would expect a Gaussian distribution with a width of σ = 1.
Fig. 2. Significances of AMANDA in 0.5 s time frames. The filled area shows the significance of the data taken in 2002, with a spread of σ = 1.13. The solid line denotes the initial background simulation (σ ≈ 1) and the dotted line a toy Monte Carlo taking into account the fluctuations due to muon rates (σ ≈ 1.11).
This expectation is supported by a background simulation using lognormal representations of the individual light sensor pulse distributions. However, one finds that the observable is spread wider than expected and exhibits a minor shoulder at high significances.
As this observable is central to the identification of
supernovae, its detailed understanding is imperative and
a close investigation of these effects is necessary.
Faults in the software were ruled out by simulating
data at the most basic level, before entering the SNDAq.
Hardware faults are improbable, as the broadening of the rates is seen both for AMANDA (σ = 1.13) and IceCube (σ = 1.27), and the detectors use different and independent power and readout electronics.
One can then check for external sources of rate changes as tracked by magnetometers, riometers and photometers, as well as seismometers at the Pole. All have been synchronized with the rate measurements of the OMs. Only magnetic field variations show a slight, albeit insignificant, influence on the rate deviation, of 4·10⁻⁵ Hz/nT for AMANDA. Due to a µ-metal wire mesh shielding, the influence on IceCube sensors is smaller by a factor of ∼30.
It turns out that the main cause of the broadening is fluctuations of the atmospheric muon rates. During 0.5 s, AMANDA detects between ≈3.0·10³ (June) and ≈3.4·10³ (December) sensor hits due to muons. Adding hits from atmospheric muons broadens and distorts the distribution derived from the noise rates. A simulation taking into account the muon-triggered PMT hits increases the width of the significance distribution to ≈1.11 (see figure 2). Although the investigations are still ongoing for IceCube, we expect that its larger σ can be ascribed to its lower noise rate, which leads to a higher resolution and thereby a stronger sensitivity to perturbations.
A Fourier transformation was performed on the data stream. No evidence for periodically recurring events was found, but occasional correlations between subsequent 0.5 s time frames, both in the summed noise rate and in the rate deviation, could be identified. We determined the mean significance before and after rate increases exceeding a predefined level and observe a symmetric correlation in the data, mainly within ±10 s. The origin of this effect, which is present both in AMANDA and IceCube data, is under investigation.
VII. EXPECTED SIGNAL AND VISIBILITY RANGE
A preliminary analysis of AMANDA data from 2000 to 2003 yielded a detection range of R_4s = 14.5 kpc at the optimal binning of 4 s and an efficiency of 90%, encompassing 81% of the stars of our Galaxy. As signals with exponentially decreasing luminosity (τ = 3 s) have been used as the underlying model, the two other binnings are less efficient, with R_0.5s = 10 kpc (56%) and R_10s = 13 kpc (75%), respectively.
The rates of the IceCube sensors are stable and uniform across the detector. Scaling the observed rates to the full 80-string detector, a summed rate of ⟨R_IceCube⟩ = (1.3·10⁶ ± 1.8·10²) Hz is expected. While a single DOM would only see an average rate increase of 13 Hz, or 0.65 σ, the signal in the whole IceCube detector would be 6.1·10⁴ Hz, or 34 σ, for a supernova at 7.5 kpc distance. Using this simple counting method, IceCube would see a supernova in the Magellanic Cloud with 5 σ significance.
Figure 3 shows the expected signal in IceCube for a supernova at 7.5 kpc conforming to the Lawrence-Livermore model, with ≈10⁶ registered neutrinos in 15 s and a statistical accuracy of 0.1% in the first 2 s. Assuming 2·10⁴ events in Super-Kamiokande (scaled from [12]), one arrives at an accuracy of 1% in the same time frame. While IceCube can neither determine the directions nor the energies of the neutrinos, it will provide the world's best statistical accuracy for following the details of the neutrino light curve.
Fig. 3. Expected signal of a supernova at 7.5 kpc in IceCube. The solid line denotes the signal expectation and the bars a randomized simulation.
Fig. 4. Supernova detection ranges using 0.5 s time frames. Shown are the significances which supernovae conforming to the Lawrence-Livermore model would cause as a function of their distance. The IceCube performance is described by the solid line, AMANDA by the dotted line. Both are given for the 0.5 s binning.
Its performance in this respect will be of the same order as that of proposed megaton proton decay and supernova search experiments. As the signal is seen on top of the background noise, the measurement accuracy drops rapidly with distance. Figure 4 shows the significance at which IceCube and AMANDA would be able to detect supernovae.
VIII. DELEPTONIZATION PEAK AND NEUTRINO
OSCILLATIONS
As mentioned before, the signal seen by AMANDA and IceCube is dominated by electron anti-neutrinos, with a small contribution from electron neutrino scattering. The sensitivity therefore depends strongly on the neutrino flavor and is thus sensitive to neutrino oscillations. As the neutrinos pass through regions of varying density within the supernova, the fluxes of electron neutrinos and electron anti-neutrinos Φ released are different from the initial fluxes Φ⁰ produced during the collapse:

  Φ_νe = p · Φ⁰_νe + (1 − p) · Φ⁰_νx ,
  Φ_ν̄e = p̄ · Φ⁰_ν̄e + (1 − p̄) · Φ⁰_ν̄x .
The survival probabilities p and p̄ for ν_e and ν̄_e vary with the mass hierarchy and the mixing angle θ_13 [10]:

  neutrino oscillation parameters                       p       p̄
  m₂² < m₃² (normal hierarchy),   sin²θ_13 > 10⁻³       ≈ 0%    69%
  m₂² > m₃² (inverted hierarchy), sin²θ_13 > 10⁻³       31%     ≈ 0%
  any hierarchy,                  sin²θ_13 < 10⁻⁵       31%     69%
At the onset of the supernova neutrino burst, during the prompt shock, a ∼10 ms long burst of electron neutrinos is emitted when the neutron star forms. As the shape and rate of this burst are roughly independent of the properties of the progenitor star, it is considered a standard candle, allowing one to determine neutrino properties without knowing the details of the core collapse.
Fig. 5. Neutrino signals of a supernova at 7.5 kpc distance, modified by oscillations, as seen in IceCube. The line shows the expectation without neutrino oscillations, the squares show the simulated signal for the normal mass hierarchy with sin²θ_13 > 10⁻³ (the case sin²θ_13 < 10⁻⁵ lies nearly on the same line), and the triangles show the signal for the inverted mass hierarchy with sin²θ_13 > 10⁻³.
The first 0.7 s of the supernova signal were modeled after [11]. Figure 5 shows the expectation for a supernova at a distance of 7.5 kpc in the 2 ms binning of IceCube.
Due to the limited statistics and the rising background from the onset of the electron anti-neutrino signal, the identification of the neutronization burst is unlikely at this distance. However, for a supernova at 7.5 kpc one should be able to draw conclusions about the mass hierarchy, depending on the reliability of the models.
IX. CONCLUSIONS AND OUTLOOK
With 59 strings and 3540 OMs installed, IceCube has reached 86% of its final sensitivity for supernova detection. It now supersedes AMANDA in the SNEWS network. With the low energy extension DeepCore, IceCube gains 360 additional OMs with a ∼30% higher quantum efficiency (rates ∼380 Hz). These modules have not been considered in this paper.
REFERENCES
[1] A. Karle et al., arXiv:0812.3981v1.
[2] F. Halzen, J. E. Jacobsen and E. Zas, Phys. Rev. D49:1758-1761, 1994.
[3] J. Ahrens et al., Astropart. Phys. 16:345-359, 2002.
[4] P. Antonioli et al., New J. Phys. 6:114, 2004.
[5] W. C. Haxton, Phys. Rev. D36:2283, 1987.
[6] M. Th. Keil, G. G. Raffelt and H.-T. Janka, Astrophys. J. 590:971-991, 2003.
[7] M. Ackermann et al., J. Geophys. Res. 111:D13203, 2006.
[8] T. Totani, K. Sato, H. E. Dalhed and J. R. Wilson, Astrophys. J. 496:216-225, 1998.
[9] J. F. Beacom and P. Vogel, Phys. Rev. D60:033007, 1999.
[10] A. Dighe, Nucl. Phys. Proc. Suppl. 143:449-456, 2005.
[11] F. S. Kitaura, H.-T. Janka and W. Hillebrandt, Astron. Astrophys. 450:345-350, 2006.
[12] M. Ikeda et al., Astrophys. J. 669:519-524, 2007.
PROCEEDINGS OF THE 31st ICRC, ŁÓDŹ 2009
Physics Capabilities of the IceCube DeepCore Detector
Christopher Wiebusch∗ for the IceCube Collaboration†
∗ III. Physikalisches Institut, RWTH Aachen University, Germany
† See the special section of these proceedings
Abstract. IceCube-DeepCore is a compact Cherenkov detector located in the clear ice at the bottom center of the IceCube Neutrino Telescope. Its purpose is to enhance the sensitivity of IceCube at low neutrino energies (< 1 TeV) and to lower the detection threshold of IceCube by about an order of magnitude, to below 10 GeV. The detector is formed by 6 additional strings with a total of 360 high quantum efficiency phototubes, together with the 7 central IceCube strings. The improved sensitivity will make it possible to probe a range of parameters of dark matter models not covered by direct experiments. It opens a new window for atmospheric neutrino oscillation measurements of ν_µ disappearance or ν_τ appearance in an energy region not well tested by previous experiments, and enlarges the field of view of IceCube to a full-sky observation when searching for potential neutrino sources. The first string was successfully installed in January 2009; commissioning of the full detector is planned for early 2010.
Keywords: Neutrino-astronomy, IceCube-DeepCore
I. INTRODUCTION
The main aim of the IceCube neutrino observatory [1] is the detection of high energy extraterrestrial neutrinos from cosmic sources, e.g. from active galactic nuclei. The detection of high energy neutrinos would help to resolve the question of the sources and the acceleration mechanisms of high energy cosmic rays.
IceCube is located at the geographic South Pole. The main instrument of IceCube will consist of 80 cable strings, each with 60 highly sensitive photo-detectors, which are installed in the clear ice at depths between 1450 m and 2450 m below the surface. Charged leptons with an energy above 100 GeV inside or close to the detector produce enough Cherenkov light to be detected and reconstructed using the timing information of the photoelectrons recorded with large-area phototubes. While this primary goal is of highest scientific interest, the instrument can address a multitude of scientific questions, ranging from fundamental physics, such as physics on energy scales beyond the reach of current particle accelerators, to multidisciplinary aspects, e.g. the optical properties of the deep Antarctic ice, which reflect climate changes on Earth.
IceCube is complemented by other major detector components. The surface air-shower detector IceTop is used to study high energy cosmic rays and to calibrate IceCube.
Fig. 1.
Geometry of the DeepCore Detector. The top part shows
the surface projection of horizontal string positions and indicates the
positions of AMANDA and DeepCore. The bottom part indicates the
depth of sensor positions. At the left the depth-profile of the optical
transparency of the ice is shown.
R&D studies are underway to supplement IceCube with radio (AURA) and acoustic (SPATS) sensors in order to extend the energy range beyond EeV energies. Six additional, more densely instrumented strings will be deployed in the bottom center of the IceCube detector and form the DeepCore detector considered here.
A first DeepCore string was successfully installed in January 2009 and has been taking data since then. The DeepCore detector will be completed in 2010 and will replace the existing AMANDA-II detector, which was decommissioned in May 2009. DeepCore will lower the detection threshold of IceCube by an order of magnitude, to below 10 GeV, and, due to its improved design, provide new capabilities compared to AMANDA. In this paper we describe the design of DeepCore and the enhanced physics capabilities which can be addressed.
II. DEEPCORE DESIGN AND GEOMETRY
The geometry of DeepCore is sketched in figure 1.
The detector consists of 6 additional strings with 60 phototubes each, together with the 7 central IceCube strings.
Fig. 2. Results of the quantum efficiency calibration at λ = 405 nm of the DeepCore phototubes compared to standard IceCube phototubes.
The detector is divided into two components:
Ten sensors of each new string are at shallow depths
between 1750 m and 1850 m, above a major dust-layer
of poorer optical transparency and will be used as a veto-
detector for the deeper component. The deep component
is formed by 50 sensors on each string and is installed
in the clear ice at depths between 2100 m and 2450 m.
It will form, together with the neighbouring IceCube
sensors the main physics volume.
The deep ice is on average twice as clear as the
average ice above 2000 m [2]. The effective scattering
length reaches 50 m and the absorption length 230 m.
Compared to AMANDA a substantially larger number
of unscattered photons will be recorded allowing for
an improved pattern recognition and reconstruction of
neutrino events in particular at lower energies.
Another important aspect is a denser spacing of photo-
sensors compared to IceCube: The horizontal inter-string
spacing is 72 m (IceCube: 125 m). The vertical spacing
of sensors along a string is only 7 m (IceCube: 17 m).
The next major improvement with respect to IceCube and AMANDA is the use of new phototubes (HAMAMATSU R7081-MOD) with higher quantum efficiency. This hemispherical 10" photomultiplier is identical to the standard IceCube PMT [3], but employs a modified cathode material of higher quantum efficiency (typically 33% at λ = 390 nm). Calibrations of the phototubes for DeepCore confirm a sensitivity improvement of 30%-40% with respect to the standard IceCube PMT (figure 2). Regular IceCube strings will also be equipped with these phototubes where they fall within the DeepCore volume.
The net effect of the denser instrumentation, the higher quantum efficiency, and the superior optical clarity of the ice is a gain of a factor of ∼6 in sensitivity for photon detection. This is an important prerequisite for a substantially lower detection threshold.
III. DEEPCORE PERFORMANCE
The electronic hardware of the optical sensors is identical to that of the standard IceCube module [3], which significantly reduces the effort for maintenance and operations compared to AMANDA. The DeepCore detector is integrated into the homogeneous data acquisition model of IceCube, which will only be supplemented by an additional trigger. Initial commissioning data from the first installed DeepCore string verify that the hardware works reliably and as expected.
The IceCube detector is triggered if, typically, a multiplicity of 8 sensors within ∼5 µs observe a signal coincident with a hit in a neighbouring or next-to-neighbouring sensor. For each trigger, the signals of the full detector are transferred to the surface. For the sensors within the considered volume, data taking is supplemented with a reduced multiplicity requirement of typically 3-4. As shown in figure 7, such a trigger is sufficient to trigger on atmospheric neutrino events down to a threshold of 1 GeV, well below the anticipated physics threshold.
The chosen location of DeepCore makes it possible to utilize the outer IceCube detector as an active veto shield against the background of down-going atmospheric muons. These are detected at a ∼10⁶ times higher rate than neutrino-induced muons. The veto provides external information to suppress this background, and standard up-going neutrino searches will strongly benefit from a larger signal efficiency and a lower detection threshold as the demands on the maturity of recorded signals decrease.
Even more intriguing is the opportunity to identify down-going ν-induced muons, which may, unlike muons induced by cosmic rays in the atmosphere, start inside the DeepCore detector. Simulations [4] show that three rings of surrounding IceCube strings and the instrumentation in the upper part of IceCube are sufficient to achieve a rejection of atmospheric muons by a factor > 10⁶ while maintaining a large fraction of the triggered neutrino signals. A further interesting aspect is the proposal in [6] to also veto atmospheric ν by detecting a correlated atmospheric µ. This could provide the opportunity to reject a substantial part of this usually irreducible background in searches for extra-terrestrial neutrinos.
Triggered events which start inside the detector will be selected online and transmitted north by satellite. Already simple algorithms can suppress the background rate by a factor > 10³ and meet the bandwidth requirements while keeping 90% of the signal [4]. A typical strategy requires that the earliest hits are located inside DeepCore and allows later hits in the veto region only if their times are causally consistent with the hypothesis of a starting track. A filter which selects starting tracks in IceCube has been active since 2008 and has allowed us to verify the performance of such filters with experimental data and to benchmark the subsequent physics analyses.
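A toy sketch of such a causality requirement (Python); the geometry handling, the vacuum-speed causality bound, and the tolerance are simplified illustrative assumptions, not the actual filter implementation:

import numpy as np

C_M_PER_NS = 0.2998   # vacuum speed of light; nothing from the vertex can arrive faster

def passes_starting_track_veto(core_hits, veto_hits, tol_ns=100.0):
    """Toy check: the earliest hit must lie inside DeepCore, and every hit in the
    veto region must arrive late enough to be causally compatible with having been
    produced by a track starting at that first DeepCore hit.

    core_hits / veto_hits: iterables of (t_ns, x_m, y_m, z_m)."""
    core = np.asarray(core_hits, dtype=float).reshape(-1, 4)
    veto = np.asarray(veto_hits, dtype=float).reshape(-1, 4)
    if core.size == 0:
        return False
    first = core[np.argmin(core[:, 0])]                       # earliest DeepCore hit
    for t, x, y, z in veto:
        if t < first[0]:
            return False                                       # earlier outside hit: likely incoming muon
        dist = np.linalg.norm([x - first[1], y - first[2], z - first[3]])
        if (t - first[0]) < dist / C_M_PER_NS - tol_ns:
            return False                                       # too early to originate from the start vertex
    return True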
The filtered events are analyzed offline with more sophisticated reconstruction algorithms. Here, the focus is to improve the purity of the sample and to reconstruct the direction, the energy and the position of the interaction vertex. A particularly efficient likelihood algorithm capable of selecting starting muons (finiteReco [4]) evaluates the hit probabilities of photomultipliers with and without a signal as a function of the distance to the track. It estimates the most probable position of the start vertex and provides the probability that a track may have reached this point undetected by the veto.
Fig. 3. The reconstructed length of contained µ tracks in DeepCore, based on the reconstructed start and stop vertices from the finiteReco algorithm, versus the neutrino energy. The data are ν-induced µ tracks from the upper hemisphere which are reconstructed to start within DeepCore.
Fig. 4. Effective neutrino detection area of IceCube (trigger level)
versus the energy for up-going neutrinos. The squares are IceCube
only. The circles represent the area if DeepCore is included.
The reconstruction algorithms are still under development, but initial results are promising. As an example, figure 3 shows the reconstructed length of µ tracks as a function of the ν energy. Already the currently achieved resolution of ∼50 m results in a visible correlation with the neutrino energy, in particular for energies ≲ 100 GeV. Note that the resolution is substantially better for vertical tracks.
The effective detection area of IceCube for neutrinos at trigger level is shown in figure 4. Despite DeepCore being much smaller than IceCube, a substantial gain of up to an order of magnitude is achieved through the additional events detected in DeepCore. Higher level event selections for specific physics analyses benefit strongly from the higher information content of the events, so that the gain from DeepCore improves further.
IV. PHYSICS POTENTIAL
A. Galactic point sources of neutrinos
The analysis of IceCube data greatly benefits from the location at the geographic South Pole, because the celestial sphere fully rotates during one sidereal day. Azimuthal detector effects are largely washed out, because each portion of the sky is observed with the same exposure and the same inclination. However, the aperture of the conventional up-going muon analysis is restricted to the Northern hemisphere and leaves out a large fraction of the galactic plane and a number of interesting objects, such as the galactic center (see figure 5). Extending the field of view of IceCube at low energies (≲ 1 TeV) to a full-sky observation will greatly enlarge the number of interesting galactic sources within reach of IceCube¹.

Fig. 5. Interesting celestial objects with known emission of TeV gamma rays.
The energy spectra of gamma rays from supernova
remnants show indications of a potential cut-off at a few
TeV [7]. Under the assumption of a hadronic production
mechanism for these gamma rays, the corresponding
neutrino fluxes would show a similar cut-off at typically
half of that cut-off value. The high sensitivity of
DeepCore to neutrinos of TeV energies and below
will complement the sensitivity of IceCube, which is
optimized for energies of typically 10 TeV and above.
B. Indirect detection of dark matter
The observation of an excess of high-energy neutrinos
from the direction of the Sun can be interpreted in
terms of annihilations of WIMP dark matter in its
center. The energy of such neutrinos is a fraction of the
mass of the WIMP particles (expected on the TeV scale)
and depends on the decay chains of the annihilation
products. The large effective area of DeepCore and the
possibility of a highly efficient signal selection greatly
improve the sensitivity of IceCube. In particular, it
becomes possible to probe regions of the parameter space
with soft decay chains and WIMP masses below ∼ 200 GeV
which are not disfavored by direct search experiments.
An example of the sensitivity for the hard annihilation
channel of supersymmetric neutralino dark matter is
shown in figure 6.

¹Note that at high energies (> 1 PeV) the background of atmospheric muons decreases rapidly, so that neutrinos from the Southern hemisphere can also be detected by IceCube [8]. However, galactic sources are usually not expected to produce significant fluxes of neutrinos at energies around the cosmic-ray knee and above.

Fig. 6. The expected upper limit of IceCube DeepCore at 90% confidence level on the spin-dependent neutralino-proton cross section for the hard (W⁺W⁻) annihilation channel as a function of the neutralino mass, for IceCube including DeepCore (solid line). Also shown are limits from previous direct and indirect searches. The shaded areas represent MSSM models which are not disfavoured by direct searches, even if their sensitivity were improved by a factor of 1000.

Fig. 7. Number of triggered vertical atmospheric neutrinos per year (per 3 GeV) versus the neutrino energy. Events from 1.6π sr are accepted. Shown are the numbers without (squares) and with (circles) the inclusion of oscillations (Δm²_atm = 0.0024 eV², sin(2θ₂₃) = 1). (Preliminary.)
C. Atmospheric neutrinos
DeepCore will trigger on the order of 10⁵ atmospheric
neutrinos per year in the energy range from 1 GeV to
100 GeV. Atmospheric neutrinos are largely unexplored
in this energy range. Smaller experiments like Super-Kamiokande
cannot efficiently measure the spectrum for
energies above 10 GeV, and measurements by AMANDA
only start at 1 TeV. In the range between 30 and 50 GeV,
decays of charged kaons become dominant over decays
of charged pions [10] for the production of atmospheric
neutrinos, and the systematic error of flux calculations
increases. A measurement of this transition could help
to reduce the systematic errors of the atmospheric
neutrino flux at TeV energies.
The first maximum of the disappearance of atmospheric
ν_μ due to oscillations appears at an energy of about
25 GeV for vertically up-going atmospheric neutrinos
[5]. The energy threshold of about 10 GeV would allow
atmospheric neutrino oscillations to be measured by a
direct observation of the oscillation pattern in this
energy range. In addition, DeepCore aims to observe the
appearance of ν_τ via the detection of small cascade-like
events in the DeepCore volume at a rate which is
anti-correlated with the disappearance of ν_μ.
Similar to ν_τ, the signature of ν_e events is a
cascade-like event with a large local light deposition
and without the signature of a track. The dominant
background to these events are charged-current ν_μ
interactions with a small momentum transfer to the μ.
Analyses like these will have to be performed considering
all three flavors and their mixing. Note that matter
effects in the Earth's core would only become visible if
the energy threshold were reduced further, below 10 GeV [5].
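To make the quoted numbers concrete, the short sketch below (our own illustration, not part of the analysis) evaluates the standard two-flavor vacuum survival probability P(ν_μ→ν_μ) = 1 − sin²(2θ₂₃) sin²(1.27 Δm² L/E) with the oscillation parameters quoted in the caption of figure 7; the Earth-diameter baseline of about 12,700 km for vertical tracks is our assumption. The first disappearance maximum indeed falls near 25 GeV.

    import numpy as np

    # Two-flavor vacuum-oscillation sketch (matter effects neglected).
    # Parameters from the text: Delta m^2_atm = 2.4e-3 eV^2, maximal mixing.
    # The Earth-diameter baseline of 12742 km is an assumption for vertical tracks.
    DM2_EV2 = 2.4e-3
    SIN2_2THETA = 1.0
    BASELINE_KM = 12742.0

    def p_survival(e_gev):
        """P(nu_mu -> nu_mu) = 1 - sin^2(2 theta) * sin^2(1.27 * dm^2 * L / E)."""
        return 1.0 - SIN2_2THETA * np.sin(1.27 * DM2_EV2 * BASELINE_KM / e_gev) ** 2

    # First maximum of disappearance: 1.27 * dm^2 * L / E = pi / 2
    e_first_max = 1.27 * DM2_EV2 * BASELINE_KM / (np.pi / 2.0)
    print(f"first disappearance maximum near {e_first_max:.0f} GeV")  # ~25 GeV
    for e in (10.0, 25.0, 50.0):
        print(f"E = {e:4.0f} GeV -> P(nu_mu survival) = {p_survival(e):.2f}")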
D. Other physics aspects
Two remaining items are only briefly mentioned here.
Slowly moving magnetic monopoles, when catalyzing
proton decays, produce subsequent energy depositions
of ∼ 1 GeV along their path on time scales of μs to
ms. Initial studies are under way to develop a dedicated
trigger for this signature using delayed coincidences.
DeepCore extends the possibility to search for neutrino
emission in coincidence with gamma-ray bursts (GRB)
to lower energies. According to [11], GRBs may emit a
burst of neutrinos. However, the predicted energies are
only a few GeV and the expected event numbers are small
(∼ 10 yr⁻¹ km⁻²). Additional studies are required to
evaluate the sensitivity for such signals.
V. SUMMARY AND OUTLOOK
This paper summarizes the enhancement of the
physics profile of IceCube by the DeepCore detector.
The geometry of DeepCore has been optimized and
construction has started. Detailed MC studies and
experimental analyses are currently under way to optimize
and finalize the analysis procedures. First data from the
full detector will be available in spring 2010; the veto
will be fully completed by 2011 at the latest.
ACKNOWLEDGEMENT
This work is supported by the German Ministry for
Education and Research (BMBF). For a full acknowl-
edgement see [1].
REFERENCES
[1] J. Ahrens et al. (IceCube Collaboration), Astropart. Phys. 20 (2004) 507-532.
[2] M. Ackermann et al. (IceCube Collaboration), J. Geophys. Res. 111 (2006) D13203.
[3] R. G. Stokstad et al. (IceCube Collaboration), Nucl. Phys. B (Proc. Suppl.) 118 (2003) 514.
[4] O. Schulz et al. (IceCube Collaboration), these proceedings.
[5] D. Grant et al. (IceCube Collaboration), these proceedings.
[6] S. Schönert et al., Phys. Rev. D 79, 043009 (2009).
[7] F. Aharonian et al. (HESS Collaboration), Astron. Astrophys. 464, 235-243 (2007).
[8] J. Dumm et al. (IceCube Collaboration), these proceedings.
[9] M. Danninger, priv. comm.; see also R. Abbasi et al. (IceCube Collaboration), arXiv:0902.2460, accepted by Phys. Rev. Lett.
[10] See e.g. Athar, Phys. Rev. D 71, 103008 (2005).
[11] J. Bahcall and P. Mészáros, Phys. Rev. Lett. 85, 1362-1365 (2000).
Fundamental Neutrino Measurements with IceCube DeepCore
Darren Grant∗, D. Jason Koskinen∗, and Carsten Rott† for the IceCube Collaboration
∗Dept. of Physics, Pennsylvania State University, University Park, PA 16802, USA
†Dept. of Physics and Center for Cosmology and Astro-Particle Physics, Ohio State University, Columbus, OH 43210, USA
Abstract. The recent deployment of the first string of DeepCore, a low-energy extension of the IceCube neutrino observatory, offers new opportunities for fundamental neutrino physics using atmospheric neutrinos. The energy reach of DeepCore, down to ∼ 10 GeV, will allow measurements of atmospheric muon neutrino disappearance at a higher energy regime than any past or current experiment. In addition to a disappearance measurement, a flavor-independent statistical analysis of cascade-like events opens the door for the measurement of tau neutrino appearance via a measurable excess of cascade-like events. In the event of a relatively large value of sin²(2θ₁₃), a multi-year measurement of the suppression of muon neutrino disappearance due to Earth matter effects may show a measurable dependence on the sign of the mass hierarchy (normal vs. inverted).
Keywords: Oscillations, DeepCore, hierarchy
I. ICECUBE DEEPCORE
The IceCube neutrino telescope is a multipurpose discovery detector under construction at the South Pole, which is currently about three quarters completed [1]. After completion in 2011, IceCube will have instrumented a volume of approximately one cubic kilometer utilizing 86 strings, each instrumented with 60 Digital Optical Modules (DOMs) at depths between 1450 m and 2450 m. Eighty of these strings (the baseline design) will be arranged in a hexagonal pattern with an inter-string spacing of about 125 m and with 17 m vertical separation between DOMs. This baseline design is complemented by six more strings that form a more densely instrumented sub-array located at the center of IceCube. These strings will be spaced in between the regular strings, so that an inter-string spacing of 72 m is achieved. Together with the seven adjacent standard IceCube strings, these six strings form the DeepCore array in the center of IceCube (shown in Figure 1). DeepCore strings have a different distribution of the 60 DOMs on them. Fifty of the 60 DOMs on a DeepCore string will be installed in the deep clear ice, below an ice layer of short absorption length (1970-2100 m), also labeled the "dust layer". The top 10 DOMs on the DeepCore strings will be deployed above this dust layer. Those DOMs add to the effective veto capability of the surrounding IceCube strings against down-going muons.
Fig. 1: IceCube with the DeepCore sub-detector in the deep clear ice at the center. The illustration on the left shows the depth profile of the optical transparency of the ice.
The DeepCore extension will significantly improve IceCube's low-energy performance and allow neutrino detection down to approximately 10 GeV (see Figure 2). This is accomplished by having DeepCore strings with a dense vertical spacing of 7 m between DOMs, which are deployed in the deepest ice, where the scattering length is approximately twice that of the upper part of the IceCube detector [2]. Coupled with the spacing and ice clarity, the DOMs themselves are instrumented with high quantum efficiency photomultiplier tubes (HQE PMTs) that have a 40% efficiency increase, wavelength dependent, compared to regular IceCube PMTs [3]. The aforementioned properties make DeepCore an ideal detector for low-energy, low-rate neutrino physics.
Fig. 2: Comparison of a preliminary study of the effective area A_eff at trigger level for the 80-string IceCube array without DeepCore (squares) and with the addition of the six DeepCore strings (open circles), versus log10(primary neutrino energy / GeV). The addition of DeepCore increases the effective area of the detector at low energies significantly. (Preliminary.)
Fig. 3: ν_μ survival probability and ν_μ → ν_τ oscillation probability versus E_ν for vertically upward-going neutrinos, where sin²(2θ₁₃) = 0.1 [6].
II. DEEPCORE NEUTRINO OSCILLATION PHYSICS
The lower energy reach achieved with DeepCore opens the possibility to investigate atmospheric neutrino oscillations in the largely unexplored energy regime of a few tens of GeV. In Figure 3 we show the expected ν_μ survival probability and ν_μ → ν_τ oscillation curves to which DeepCore will have sensitivity; they relate directly to the potential measurements discussed in the following subsections. In addition to the improved energy reach, part of what makes such measurements possible is the innate background rejection built into the DeepCore design: the increased overburden reduces the number of atmospheric muons and the surrounding IceCube strings provide an in situ veto. Simple veto methods have achieved background reductions of four orders of magnitude with excellent signal retention and have the potential for greater than six orders of magnitude rejection utilizing reconstruction veto methods [3].
A. Muon-Neutrino Disappearance
Previous measurements of neutrino oscillations at the atmospheric scale have been made with detectors significantly smaller in both energy reach and active volume than DeepCore. With an approximate 13 MT fiducial volume, DeepCore has the capacity to make a precision measurement of atmospheric neutrino oscillations above 10 GeV [4]. The ν_μ survival probability curve shown in Figure 3 illustrates the expectation of a significant deficit in the neutrino flux, shown in Fig. 4, in a previously unexplored energy region. An issue associated with such an analysis is that the angular resolution of neutrino-induced muon tracks at these energies is fundamentally limited by the kinematics of the neutrino-nucleon interaction. Low-energy ν_μ interactions have a much larger opening angle between the incoming neutrino and the outgoing muon than high-energy interactions, which gives the muon a higher probability of being non-collinear with the incoming neutrino. The intrinsic uncertainty on the opening angle reduces any experiment's ability to identify perfectly upward-going neutrinos; the uncertainty can be approximated, in the full data sample, by Δφ ≃ 30° × √(GeV/E_ν).
However, oscillations can be observed with very high significance with an inclusive measurement over the zenith-angle range −1.0 < cos φ < −0.6, and the incorporation of the angular dependence will only improve the result. Fig. 4 shows a simulation of this muon disappearance effect, which would be an approximately 20 sigma effect with just one year of IceCube DeepCore data. The current study only discusses the effect on the signal, taking statistical uncertainties into account. Systematic uncertainties remain to be studied, as does the background prediction for this measurement.
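As a quick numerical illustration of this kinematic limit (a sketch only; the 30° × √(GeV/E) scaling is simply the approximation quoted above):

    # Approximate nu_mu opening-angle spread from the text: ~30 deg * sqrt(GeV / E).
    for e_gev in (10.0, 25.0, 100.0):
        dphi = 30.0 * (1.0 / e_gev) ** 0.5
        print(f"E_nu = {e_gev:5.1f} GeV -> opening-angle spread ~ {dphi:4.1f} deg")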
B. Tau-Neutrino Appearance
Returning to Figure 3 and looking at the ν_μ → ν_τ oscillation curve, we expect a fraction of the incoming atmospheric neutrino flux to acquire a ν_τ component. Given the higher parent ν_μ flux and the different decay kinematics of tau events relative to ν_e charged-current (CC) and ν_x neutral-current (NC) (x = e, μ, τ) events, we should be able to detect ν_τ both via an excess of cascade (hadronic and electromagnetic shower) events and possibly through the resulting distortion of the energy spectrum. This measurement would not only represent the largest sample of tau neutrinos ever collected (albeit inclusively), it may also be competitive with OPERA [5] in making an appearance measurement of tau neutrinos due to oscillations.
Future components of this analysis comprise developing a dedicated energy reconstruction for low-energy cascade events, examining the background caused by short muon tracks that mimic cascades, and assessing the impact of misidentification of up-going versus down-going neutrinos. Good energy resolution will increase the sensitivity to ν_τ appearance in the regions where neutrino oscillations are maximal.
Fig. 4: Simulated ν_μ disappearance with 1 year of DeepCore data, shown as events per year versus NChannel. The lower curve (open circles) shows the number of upward-going muons observed over the zenith-angle range cos φ < −0.6 with oscillations, while the upper curve (squares) is the corollary without oscillations. No systematic errors or background have been included. The effect is approximately 20 sigma based purely on statistical errors (approximately 28000 events per year without oscillations and 16000 events with oscillations, assuming sin²(2θ₂₃) = 1.0 and Δm²₂₃ = 0.0024 eV²). Note that NChannel is a crude energy estimator for the detector based on the number of hit DOMs. (Preliminary.)
C. Matter Effects and Neutrino Mass Hierarchy
Depending on the detection efficiency and purity, a sufficiently large value of sin²θ₁₃ [6], and control of the systematics, the DeepCore detector may be used in an ambitious multi-year effort to determine the sign of the neutrino mass hierarchy. This may be accomplished by measuring a small enhancement or suppression, from the MSW effect [11], of the expected number of ν_μ events. The ν_μ oscillation probability, shown in Figure 5, indicates that the neutrino rate over the 8-25 GeV energy region is enhanced for the normal hierarchy (NH), while for the inverted hierarchy (IH) it is the anti-neutrino rate that is enhanced. A complication of this measurement is that DeepCore cannot distinguish between neutrinos and anti-neutrinos; however, in the relevant energy range of 10 GeV < E_ν < 30 GeV the interaction cross sections of neutrinos and anti-neutrinos differ by a factor of two, σ(ν_x) ≃ 2σ(ν̄_x). The difference in interaction cross section translates into a difference in the number of observed muon neutrino candidate events. Based on statistical discrimination only, it may be possible to distinguish the normal from the inverted hierarchy. In Fig. 6 we show the expected results from 5 years of DeepCore data, with a statistical separation between the normal and inverted hierarchies of approximately 10 sigma. The described effect depends on a sufficiently large value of θ₁₃ (which is expected to be measured by the time the data for this measurement have been obtained). Furthermore, the signal systematic uncertainties need to be sufficiently small and the remaining background needs to be effectively removed.
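The role of the factor-two cross-section difference can be illustrated with a deliberately simplified toy calculation (ours, not the paper's simulation; all fluxes and probabilities below are placeholder numbers): because ν events are weighted roughly twice as heavily as ν̄ events in the combined, flavor-blind rate, a matter-induced change applied to the neutrino channel (normal hierarchy) shifts the total rate more than the same change applied to the anti-neutrino channel (inverted hierarchy).

    # Toy model: nu/nubar-blind event rate = phi_nu*sigma_nu*P_nu + phi_nubar*sigma_nubar*P_nubar.
    # sigma(nu) ~ 2 sigma(nubar), as stated in the text; all other numbers are placeholders.
    phi_nu = phi_nubar = 1.0               # assume equal fluxes (placeholder)
    sigma_nu, sigma_nubar = 2.0, 1.0
    p_base = 0.5                           # baseline survival probability (placeholder)
    dp = 0.1                               # hypothetical matter-induced enhancement

    rate_nh = phi_nu * sigma_nu * (p_base + dp) + phi_nubar * sigma_nubar * p_base   # enhancement on nu
    rate_ih = phi_nu * sigma_nu * p_base + phi_nubar * sigma_nubar * (p_base + dp)   # enhancement on nubar
    print(f"toy rate, normal hierarchy:   {rate_nh:.2f}")
    print(f"toy rate, inverted hierarchy: {rate_ih:.2f}")  # rates differ -> statistically separable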
III. CONCLUSIONS
We have completed a full Monte Carlo study of the IceCube DeepCore detector that shows the potential for measurements of fundamental neutrino properties. We have discussed the expected effects on the signal for ν_μ disappearance at energies higher than previously measured, a measurement of ν_τ appearance, as well as the resolution of the neutrino mass hierarchy.
REFERENCES
[1] A. Achterberg et al., Astropart. Phys. 26, 155 (2006).
[2] M. Ackermann et al., J. Geophys. Res. 111, D13203 (2006).
[3] D. Cowen, for the IceCube Coll., NUTEL 09, forthcoming.
[4] Y. Fukuda et al. [Super-Kamiokande Collaboration], Phys. Rev. Lett. 81, 1562 (1998) [arXiv:hep-ex/9807003].
[5] R. Acquafredda et al. [OPERA Collaboration], New J. Phys. 8, 303 (2006) [arXiv:hep-ex/0611023].
[6] O. Mena, I. Mocioiu and S. Razzaque, Phys. Rev. D 78, 093003 (2008).
[7] M. Sanchez [MINOS Collaboration], Moriond EW 2009, forthcoming.
[8] H. L. Ge, C. Giunti and Q. Y. Liu, arXiv:0810.5443 [hep-ph].
[9] G. L. Fogli, E. Lisi, A. Marrone, A. Palazzo and A. M. Rotunno, Phys. Rev. Lett. 101, 141801 (2008) [arXiv:0806.2649 [hep-ph]].
[10] M. Apollonio et al. [CHOOZ Collaboration], Eur. Phys. J. C 27, 331 (2003) [arXiv:hep-ex/0301017].
[11] E. K. Akhmedov, M. Maltoni and A. Y. Smirnov, JHEP 0705, 077 (2007).
[12] C. Rott et al. for the IceCube Coll., these proceedings.
[13] D. Grant et al. (DeepCore reconstruction) for the IceCube Coll., these proceedings.
[14] C. Wiebusch et al. (DeepCore) for the IceCube Coll., these proceedings.
[15] R. Abbasi et al. [IceCube Collaboration], Nucl. Instrum. Meth. A 601, 294 (2009).
[16] E. Resconi for the IceCube Coll., astro-ph/0807.3891.
Fig. 5: Oscillation probabilities P(ν_μ → ν_μ) versus E_ν for upward-going neutrinos [6], for sin²(2θ₁₃) = 0.1 and 0.06 and for the normal (NH) and inverted (IH) hierarchy. For a value of sin²(2θ₁₃) = 0.1, the difference in the survival probability between the normal hierarchy (solid black line) and the inverted hierarchy (dashed red line) is ≈ 7%, and increases for higher values of sin²(2θ₁₃). Recent measurements [7] as well as global fits [8], [9] prefer a non-zero sin²(2θ₁₃), while the value of 0.10 reflects the 90% confidence limit set by CHOOZ [10].
Fig. 6: Predicted rate of events per 5 GeV of detected muon energy with 5 years of data for the normal (squares) and inverted (open circles) hierarchy, for ν_μ-induced muon tracks within 45 degrees of vertical that start within the DeepCore fiducial volume. We find that in the first two bins the rate for the inverted hierarchy is above that for the normal hierarchy, while in the remaining bins the rates overlap. Systematic errors are not yet estimated; statistical errors are too small to be visible. Note that sin²θ₁₃ = 0.1 in the presented case. (Preliminary.)
Implementation of an active veto against atmospheric muons in IceCube DeepCore
Olaf Schulz∗, Sebastian Euler† and Darren Grant‡ for the IceCube Collaboration§
∗Max-Planck-Institut für Kernphysik, Saupfercheckweg 1, 69117 Heidelberg, Germany
†III. Physikalisches Institut, RWTH Aachen University, 52056 Aachen, Germany
‡Dept. of Astronomy and Astrophysics, Pennsylvania State University, University Park, PA 16802, USA
§See the special section of these proceedings
Abstract. The IceCube DeepCore [1] has been designed to lower the energy threshold and broaden the physics capabilities of the IceCube Neutrino Observatory. A crucial part of the new opportunities provided by DeepCore comes from the possibility to reject the background of atmospheric muons. This can be done by using the large instrumented volume of the standard IceCube configuration around DeepCore as an active veto region. By restricting the expected signal to those neutrino events with an interaction vertex inside the central DeepCore region, it is possible to look for neutrinos from all directions, including the Southern hemisphere, which was previously not accessible to IceCube. A reduction of the atmospheric muon background below the expected rate of neutrinos is obtained by first vetoing events in DeepCore with causally related hits in the veto region. In a second step, the potential starting vertex of a muon track is reconstructed and its credibility is estimated using a likelihood method. Events with vertex positions outside of DeepCore or with low starting probabilities are rejected. We present these newly developed veto and vertex reconstruction techniques and describe in detail the background rejection and signal efficiency obtained so far from full Monte Carlo studies.
Keywords: high-energy neutrino astronomy, IceCube, DeepCore
I. INTRODUCTION
The IceCube Neutrino Observatory [2] is currently being built at the geographic South Pole in Antarctica. After completion it will consist of ∼ 4800 digital optical modules (DOMs) on 80 strings instrumenting one cubic kilometer of ice at depths between 1450 m and 2450 m. Each DOM consists primarily of a photomultiplier tube and read-out electronics in a glass pressure vessel. IceCube is designed to detect highly energetic neutrino-induced muons as well as hadronic or electromagnetic showers (cascades) that produce Cherenkov radiation in the medium. Significant backgrounds to the signal, caused by muons from atmospheric air showers above the detector, limit the field of view to the Northern hemisphere for many studies that use neutrino events in IceCube.

Fig. 1. Schematic view of the IceCube DeepCore.

In addition to its nominal layout, the DeepCore extension to the observatory will lower the IceCube energy threshold from ∼ 100 GeV down to neutrino energies as low as 10 GeV. This improvement in the detector energy response is achieved by including 6 extra strings, deployed in a denser spacing, around a standard central IceCube string. Each of these strings will be equipped with 60 DOMs containing Hamamatsu high quantum efficiency photomultiplier tubes (HQE PMTs). Fifty of these DOMs will be placed with a dense spacing of ∼ 7 m in the lowest part of the detector, where the ice is clearest and the scattering and absorption lengths are considerably longer [3]. The remaining 10 modules are to be placed with a 10 m spacing at depths from 1760 m to 1850 m. This position has been chosen in order to improve IceCube's capability to actively identify and reduce the atmospheric muon background to the central DeepCore volume, as described below. The HQE PMTs have a quantum efficiency that is up to 40% higher, depending on wavelength, than that of the standard IceCube PMTs, while their noise rate of ∼ 380 Hz is
on average increased by about 32%. Together with the 7 neighboring IceCube strings, DeepCore will consist of 13 strings equipped with 440 optical modules, instrumenting a volume of ∼ 13 megatons water-equivalent.
DeepCore will improve the IceCube sensitivity for many different astrophysics signals, like the search for solar WIMP dark matter and for neutrinos from gamma-ray bursts [4]. It also opens the possibility to investigate atmospheric neutrino oscillations in the energy range of a few tens of GeV [5]. An additional intriguing opportunity offered by DeepCore is the possibility to identify neutrino signals from the Southern hemisphere. Such a measurement requires a reduction of the atmospheric muon background by more than a factor of 10⁶ in order to obtain a ratio of signal (atmospheric neutrinos) to background rate better than one. A first step toward achieving this reduction is implicit in the design of DeepCore. Background events which trigger DeepCore with a minimum number of hits in the DeepCore fiducial volume must pass through a larger overburden, resulting in an order of magnitude decrease in the atmospheric muon rate with respect to the whole IceCube detector. Two additional steps are then performed to attain the remaining 10⁵ rejection factor. The first is a veto of DeepCore events with causally related hits in the surrounding IceCube volume, reducing the background rate by 10² to 10³. Then we apply a vertex reconstruction algorithm based on a maximum-likelihood method that determines the approximate neutrino interaction vertex. By rejecting events with a reconstructed vertex outside the central DeepCore volume a full 10⁶ background reduction may be achieved.
II. TRIGGERING DEEPCORE
The first reduction of the atmospheric muon background rate, with respect to IceCube, is achieved by applying a simple majority trigger (SMT) to the DeepCore region, based on the number of channels registering a hit in coincidence with a neighboring DOM. The trigger hit coincidence requirement, also known as hard local coincidence (HLC), is such that each hit channel is accompanied by at least one more hit on one of the four closest neighboring modules within a time window of ±1000 ns. The rate of atmospheric muons triggering DeepCore depends largely on the required multiplicity. In this study we applied a trigger requirement of 6 HLC hits (SMT6), which translates to an average neutrino energy of approximately 10 GeV.
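A minimal sketch of this trigger logic is shown below; the event representation (lists of DOM id and time pairs), the neighbor map and the function names are our own simplifications, not the IceCube DAQ implementation.

    # Hard local coincidence (HLC) + simple majority trigger (SMT6) sketch.
    HLC_WINDOW_NS = 1000.0   # +/- 1000 ns coincidence window (from the text)
    SMT_THRESHOLD = 6        # SMT6: at least 6 HLC hits in the DeepCore volume

    def hlc_hits(hits, neighbors):
        """Keep hits accompanied by a hit on one of the four nearest DOMs in the window.

        hits: list of (dom_id, time_ns); neighbors: dom_id -> set of its four nearest DOMs.
        """
        kept = []
        for dom, t in hits:
            for other_dom, other_t in hits:
                if other_dom in neighbors.get(dom, ()) and abs(other_t - t) <= HLC_WINDOW_NS:
                    kept.append((dom, t))
                    break
        return kept

    def smt6_fires(deepcore_hits, neighbors):
        """DeepCore SMT6 decision: enough HLC hits in the fiducial volume."""
        return len(hlc_hits(deepcore_hits, neighbors)) >= SMT_THRESHOLD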
Table I shows the approximate detector rates at trigger level and after application of the veto algorithm. The final rates after application of the cuts on the reconstructed vertex are not given in the table, since they are analysis dependent and vary strongly with the cut strength. The background events, muons from cosmic-ray air showers, are simulated using CORSIKA [6]. In this study only the most energetic muon is propagated through the full detector simulation. This is a conservative approach, since the other muons could only improve the veto efficiency. The signal rate given here is the rate of neutrinos produced in atmospheric air showers and has been determined following the flux calculations of the Bartol group [7]. Neutrino oscillation effects have not been taken into account. Since the goal is to identify starting muon tracks specifically, the signal is restricted to those events with a simulated interaction vertex within the DeepCore volume. The efficiencies given in Table I relate to events that fulfill this requirement and have a DeepCore SMT6 trigger. The background rejection factors refer to the expected main IceCube trigger rate, built up from an IceCube-only SMT8 trigger, a string trigger which requires 5 out of 7 aligned modules on a string to fire within a trigger window of 1500 ns, as well as the DeepCore SMT6 trigger itself. Applying the SMT6 trigger gives a background rate which exceeds the signal by a factor of ∼ 10⁵. This sets the challenge for the performance of the veto algorithms to be applied.

Fig. 2. Scheme of the veto principle: illustration of the DeepCore hit center of gravity (COG), vertex time and particle speed per hit.
III. THE VETO ALGORITHM
DeepCore is surrounded by more than 4500 DOMs
that can be used as an active veto volume to reject
atmospheric muons. If hits in the surrounding standard
IceCube array are consistent with a particle moving
downwards with v=c the event is rejected. For the veto
algorithm, and also for any following reconstructions,
it is essential to keep as many physics hits as possible.
Therefore all hits in the detector are used here, including
hits on DOMs without HLC (a mode called soft local
coincidence, or SLC). To reduce the amount of dark
noise hits, we reject any hits that are isolated from
others by more than 150 m in distance or by more
than 1000 ns in hit time. To determine whether or not to reject an event, we initially compute the average hit PMT position and an approximate start time (vertex time) of the DeepCore fiducial volume hits (see Fig. 2). The interaction position is determined by using the subset of those hit DOMs that have times within one standard deviation of the first-guess vertex time. This has the benefit of reducing the contribution from PMT dark noise and, by weighting the DOMs by their individual charge deposition, a reasonable center of gravity (COG) for the event is computed. By making the assumption that roughly all light in the DeepCore volume originates from the COG, a more thorough estimate of the vertex time is possible. For each individual hit, the time light would have needed to travel from the COG to the hit module is calculated and subtracted from the original PMT hit time. The average of these corrected PMT hit times is then taken as the vertex time.

TABLE I
BACKGROUND AND SIGNAL RATES AFTER DEEPCORE TRIGGER AND CAUSAL HIT VETO

                         | atm. μ (CORSIKA) | rejection | atm. ν_μ (Bartol) | eff.  | atm. ν_μ upwards | atm. ν_μ downwards
main IceCube triggers    | 2279 Hz          | -         | -                 | -     | -                | -
DeepCore event selection | 102 Hz           | 4.5·10²   | 1.799·10⁻³ Hz     | 100%  | 0.901·10⁻³ Hz    | 0.895·10⁻³ Hz
after veto               | 1.2 Hz           | 5.4·10⁴   | 1.719·10⁻³ Hz     | 95.5% | 0.863·10⁻³ Hz    | 0.856·10⁻³ Hz

Fig. 3. Particle speed probabilities per event for atmospheric muons (dotted line) and muons induced by atmospheric neutrinos inside DeepCore (solid line).
Each hit in the veto region is assigned a particle speed, defined as the spatial distance of the hit to the DeepCore COG divided by the time difference to the DeepCore vertex time. This speed is defined to be positive if the hit occurred before the vertex time and negative if it appeared after. Causally related hits in the veto region are generally expected to have a speed close to the speed of the muon, which is very close to the speed of light in vacuum (0.3 m/ns). Smaller speeds occur for hits that have been scattered and thus arrive late. Larger speeds are in principle acausal, but since the vertex time represents the start of a DeepCore event, whereas the COG defines its center, the particle speeds for early hits are slightly overestimated. Late hits, on the other hand, typically have lower speeds. Fig. 3 shows the probability of the occurrence of a particular particle speed per event. The dotted curve describes the simulated muon background from air showers (CORSIKA) and the solid curve the atmospheric neutrino signal [7] with an interaction vertex inside DeepCore. The peak for the CORSIKA muons is slightly above +0.3 m/ns, while muons induced by neutrinos in DeepCore mainly give hits with negative particle speeds. The peak at positive speeds close to zero is mainly due to early scattered light. By cutting all events with more than one hit within a particle-speed window between 0.25 and 0.4 m/ns, we achieve an overall background rejection on the order of 5·10⁴ (see Table I).

Fig. 4. Principle of the vertex reconstruction.
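To make the selection concrete, a minimal sketch of the particle-speed computation and cut is given below; the event representation (hit positions and times), the helper names and the simplifications (no charge weighting, no COG time correction) are ours, not the IceCube software.

    import math

    SPEED_WINDOW = (0.25, 0.40)   # m/ns, veto window from the text

    def particle_speed(hit_pos, hit_time, cog, vertex_time):
        """Distance from a veto-region hit to the DeepCore COG divided by the time
        difference to the vertex time; positive if the hit came before the vertex time.
        Assumes hit_time != vertex_time."""
        distance = math.dist(hit_pos, cog)          # positions in meters, times in ns
        return distance / (vertex_time - hit_time)

    def veto_event(veto_hits, cog, vertex_time):
        """Reject the event if more than one veto hit falls inside the speed window."""
        speeds = [particle_speed(p, t, cog, vertex_time) for p, t in veto_hits]
        n_in_window = sum(SPEED_WINDOW[0] <= s <= SPEED_WINDOW[1] for s in speeds)
        return n_in_window > 1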
IV. THE VERTEX RECONSTRUCTION
To achieve the remaining background rejection, a second algorithm is used. It analyzes the pattern of hits in an event in conjunction with an input direction and position of a reconstructed track. From the track the algorithm estimates the neutrino interaction vertex and calculates a likelihood ratio which is used as a measure of the degree of belief that the track starts at the estimated position.
As shown in Fig. 4, we trace back from each hit DOM to the reconstructed track using the Cherenkov angle of 41° in ice. This projection is calculated for all DOMs within a cylindrical volume of radius 200 m around the track, and the DOMs are ordered according to this position. (Note that 200 m is large enough to contain virtually all photons produced by the track.) The projection of the first hit DOM in the up-stream direction defines the neutrino interaction (reconstructed) vertex. A reconstructed vertex inside IceCube indicates a potential starting (neutrino-induced) track. Due to the large distance between neighboring strings, atmospheric muons may leak through the veto, producing their first hit deep inside the detector and thus mimicking the
signature of a starting track.

Fig. 5. Distributions of the cut parameters of the vertex reconstruction: likelihood ratio (left) and position of the reconstructed vertex for atmospheric muons from CORSIKA (middle) and atmospheric neutrinos (right).

Therefore it is necessary
to quantify for each event the probability that it actually starts at the reconstructed vertex. To determine this starting likelihood, one first selects all DOMs without a hit whose projection on the assumed track lies up-stream of the first hit DOM. The probability that each of these DOMs did not receive a hit is calculated under two track hypotheses: a track starting at the reconstructed vertex and a track starting outside the detector volume. Under the assumption of an external track, p(noHit|Track) is calculated. Here, for each DOM the probability of not being hit (in spite of the passing track) depends on the track parameters (energy of the light-emitting particle, position and direction of the track) and the ice properties. The probability is calculated from the expected number of photoelectrons, taken from Photorec tables of the Photonics project [8], assuming Poisson statistics:

p_λ(noHit) = p_λ(0) = (λ⁰/0!) e^(−λ) = e^(−λ),    (1)

where λ is the expected number of photoelectrons. Under the assumption of a starting track, p(noHit|noTrack) is calculated, which is equal to the probability of a noise hit and can therefore be determined from measured noise rates.
The likelihood for the observed hit pattern may now be constructed as the product of the individual probabilities. A track is classified as starting in the detector according to the probability given by the ratio of the likelihoods. For a clearly starting track this ratio is a negative number, and the larger its magnitude the higher the starting probability for the track. To select tracks starting inside the detector, cuts are applied on the position of the reconstructed vertex and on the likelihood ratio. The distributions of the cut parameters are shown in Fig. 5.
Preliminary studies have so far utilized the true simulated track, since dedicated low-energy track reconstructions are still under development. Even though idealized, these studies strongly indicate that an overall background rejection of > 10⁶ can be achieved without having to extend the vertex cuts into the densely instrumented DeepCore fiducial region, while keeping the majority of the signal events.
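A schematic sketch of this construction is given below (our own illustration, not the IceCube implementation): the expected photoelectron counts per unhit DOM would in reality come from the Photorec tables, so here they are passed in as a precomputed list, and the noise probability is treated as a single placeholder constant.

    import math

    def log_likelihood_ratio(expected_pe_upstream, p_noise_hit=1e-4):
        """Starting-track vs. external-track log-likelihood ratio from unhit DOMs.

        expected_pe_upstream: expected photoelectrons (lambda) in each unhit DOM
        up-stream of the first hit DOM, as predicted for an external track.
        p_noise_hit: probability of a noise hit per DOM (placeholder value).
        """
        log_l_external = sum(-lam for lam in expected_pe_upstream)             # log prod e^(-lambda)
        log_l_starting = len(expected_pe_upstream) * math.log(1.0 - p_noise_hit)
        # A large magnitude means the unhit pattern strongly disfavors an external
        # track, i.e. a high probability that the track starts inside the detector.
        return log_l_external - log_l_starting

    print(log_likelihood_ratio([2.0, 1.5, 0.8]))   # example with made-up lambdas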
V. SUMMARY AND OUTLOOK
We have presented the methods developed thus far to reduce the rate of background muon events within the IceCube DeepCore detector. Utilizing the instrumented standard IceCube volume around DeepCore as an active veto to identify and reject atmospheric muon events improves the possibility of detecting neutrino-induced muons and cascades independently of direction. The rate of atmospheric muons is reduced mainly in a two-step process. First, a veto algorithm is applied against DeepCore events with causally related hits in the surrounding IceCube region. Second, for the events surviving the veto, a cut on a likelihood ratio is applied to determine the probability that the event had a starting vertex within the fiducial region of the detector. Monte Carlo studies indicate that both methods together will be suitable to reduce the background muon rate by more than the factor of 10⁶ needed to obtain a signal (atmospheric neutrinos) to background ratio of > 1. IceCube and DeepCore are currently under construction and will be finished in 2011. The fully deployed DeepCore detector will provide an effective volume of several megatons of water equivalent for neutrino events with an energy above 10 GeV and a starting vertex in DeepCore. The exact volume will depend on the required signal-to-noise ratio and the individual analysis strategies.
REFERENCES
[1] E. Resconi et al. (IceCube Collaboration), Proceedings of VLVnT08 (2008).
[2] A. Achterberg et al. (IceCube Collaboration), Astropart. Phys. 26, 155 (2006).
[3] M. Ackermann et al. (IceCube Collaboration), J. Geophys. Res. 111, D13203 (2006).
[4] J. N. Bahcall and P. Meszaros, Phys. Rev. Lett. 85, 1362 (2000).
[5] E. K. Akhmedov, M. Maltoni and A. Y. Smirnov, JHEP 0705, 077 (2007).
[6] D. Heck et al., Report FZKA 6019 (1998).
[7] G. D. Barr et al., Phys. Rev. D 70, 023006 (2004).
[8] J. Lundberg et al., Nucl. Instrum. Meth. A 581, 619-631 (2007).
Acoustic detection of high energy neutrinos in ice: Status and results from the South Pole Acoustic Test Setup
Freija Descamps∗ for the IceCube Collaboration†
∗Department of Subatomic and Radiation Physics, University of Ghent, 9000 Ghent, Belgium
†See the special section of these proceedings.
Abstract. The feasibility and specific design of an acoustic neutrino detection array at the South Pole depend on the acoustic properties of the ice. The South Pole Acoustic Test Setup (SPATS) has been built to evaluate the acoustic characteristics of the ice in the 1 to 100 kHz frequency range. The most recent results of SPATS are presented.
Keywords: SPATS, acoustic neutrino detection, acoustic ice properties
I. INTRODUCTION
The predicted ultra-high energy (UHE) neutrino fluxes from both hadronic processes in cosmic sources and the interaction of high energy cosmic rays with the cosmic microwave background radiation are very low. Therefore extremely large detector volumes, on the order of 100 km³ or more, are needed to detect a significant number of these neutrinos. The idea of a large hybrid optical-radio-acoustic neutrino detector has been described and simulated in [1], [2]. The density of detectors in such a possible future UHE neutrino telescope is dictated by how the signal-to-noise ratio of the produced signals evolves with the distance travelled. The ice has optical attenuation lengths on the order of 100 m, enabling the construction and operation of the IceCube [3] detector. The relatively short optical attenuation length makes building a detector much larger than IceCube prohibitively expensive. In contrast, the attenuation lengths of both radio and acoustic waves are expected to be larger [4], [5] and may make the construction of a larger detector feasible. The South Pole Acoustic Test Setup (SPATS) has been built and deployed to evaluate the acoustic attenuation length, background noise level, transient rates and sound speed in the South Pole ice cap in the 1 to 100 kHz region, so that the feasibility and specific design of an acoustic neutrino detection array can be assessed. Acoustic waves are bent toward regions of lower propagation speed, and the vertical sound speed profile dictates the refraction index and the resulting radius of curvature. An ultra-high energy neutrino interaction produces an acoustic emission disk that is deformed more for larger sound speed gradients, making the direction reconstruction more difficult. A good event vertex reconstruction allows accurate rejection of transient background. The absolute level and spectral shape of the continuous background noise determine the threshold at which neutrino-induced signals can be extracted from the background and thus set the lower energy threshold for a given detector configuration. Transient acoustic noise sources can be misidentified as possible neutrino candidates; therefore a study of transient sources and signal properties needs to be performed. The acoustic signal undergoes a geometric 1/r attenuation and, on top of that, scattering and absorption. The overall acoustic attenuation influences the detector design and hence its cost.
II. INSTRUMENTATION
A. The SPATS array
The South Pole Acoustic Test Setup consists of four vertical strings that were deployed in the upper 500 meters of selected IceCube holes [3] to form a trapezoidal array with inter-string distances from 125 to 543 m. Each string has 7 acoustic stages. Figure 1 shows a schematic of the SPATS array and its in-ice and on-ice components. It also shows a schematic drawing of an acoustic stage, comprised of a transmitter and a sensor module. The transmitter module consists of a steel pressure vessel that houses a high-voltage pulse generator board and a temperature or pressure sensor. Triggered HV pulses are sent to the transmitter, a ring-shaped piezo-ceramic element that is cast in epoxy for electrical insulation and positioned ∼ 13 cm below the steel housing. Azimuthally isotropic emission is the motivation for the use of ring-shaped piezo-ceramics. The actual emission directivity of such an element was measured in the azimuthal and polar directions [6]. The sensor module has three channels, each 120° apart in azimuth, to ensure good angular coverage.
B. The retrievable pinger
A retrievable pinger was deployed in 10 water-filled
IceCube holes down to a depth of 500 m: 6 holes were
pinged in December 2007-January 2008 and 4 more
holes were pinged using an improved pinger design in
December 2008-January 2009. The emitter¹ is a broadband omni-directional transmitter which has a transmission power of 149 dB re (μPa/V) at 1 meter distance. Upon receiving a trigger signal, a short (∼ 50 μs) HV pulse is sent to the piezo-ceramic transmitter, which is suspended about 2 m below the housing. A broadband acoustic pulse is then emitted. Both the SPATS array and the pinger are GPS synchronized so that the arrival times of the pinger pulses can be determined. Data were collected for all SPATS sensor levels at 80, 100, 140, 190, 250, 320, 400, 430 and 500 m depth, and the pinger was pulsed at repetition rates of 1, 8 or 10 Hz. At the SPATS-instrumented depths, the pinger lowering was stopped for five minutes so that a signal could be recorded with every SPATS sensor at that pinger position. For the December 2008-January 2009 holes, no stop was made at 80, 100, 140 and 430 m depth.

¹ITC-1001 from the International Transducer Company.

Fig. 1. Schematic of the SPATS detector.
III. RESULTS
A. Sound speed
The sound speed analysis uses data from the December 2007-January 2008 geometries where the pinger and sensor were at the same depth and 125 m apart. The pinger emitter was situated in a column of water where no shear waves can propagate; nevertheless, shear waves were generated at the water-ice boundary, where mode conversion was expected. Therefore transit times can be extracted from the data for both pressure and shear waves for many instrumented SPATS levels. There is agreement with [7] for the pressure wave measurement in the not fully compacted ice of the upper region (firn). The extracted shear wave speed is about half of the pressure wave speed, as expected [8]. A linear fit was made to the data in the deep and fully compacted ice between 250 and 500 m depth. We find the following results for the pressure and shear wave sound speeds (v_p and v_s) and their variation with depth (gradients g_p and g_s):

v_p(375 m) = (3878 ± 12) m/s,  g_p = (0.09 ± 0.13) (m/s)/m,
v_s(375 m) = (1975.8 ± 8.0) m/s,  g_s = (0.067 ± 0.806) (m/s)/m.

The gradient for both pressure and shear waves is consistent with zero. Both sound speed measurements are performed with better than 1% precision, taking into account the errors on the horizontal distance, the pinger and sensor depths, and the emission and arrival times. For more details on the SPATS sound speed analysis, see [9].
B. Gaussian noise floor
The noise is monitored in SPATS through a forced 200 kHz read-out of all sensor channels for 0.5 s every hour. The distribution of ADC counts in each of the operational channels is Gaussian and has been stable during the present observation time of more than 1 year, with a typical relative deviation of the mean noise level of σ_RMS/⟨RMS⟩ < 10⁻². The SPATS sensors on strings A, B and C were calibrated in liquid water at 0 °C in the 10 to 80 kHz frequency range prior to deployment [6]. Lab measurements have shown [6], [10] that the sensitivity of the SPATS sensor in air at atmospheric pressure increases by a factor of 1.5 ± 0.2 when cooled down from 0 °C to -50 °C. We use this value to estimate the sensitivity of the SPATS sensors after their deployment in the cold Antarctic ice. A
measurement of the sensitivity at room temperature as a function of static pressure was performed in a pressure vessel, and the results indicate a change in sensitivity of < 30% in the 1 to 100 bar region [10]. The noise level and its fluctuations are high for all 4 strings in the firn region, where the transition from a snow/air mixture to compact bulk ice takes place. This is consistent with the effect of a large sound speed gradient in that layer, which refracts surface noise back to the surface. In the fully compacted ice below the firn, noise conditions are more stable and we derive an average noise level below 10 mPa in the relevant frequency range (10 to 50 kHz). For more details on the SPATS noise floor analysis, see [11].
C. Transient events
The SPATS detector has been operated in transient mode for 45 minutes of every hour since August 2008. If the number of ADC counts on any of the twelve monitored channels exceeds a certain level above noise, we record a 5 ms window of data around the trigger on that channel. The resulting trigger rate is stable and on the order of a few triggers every minute for each of the twelve monitored channels. Most of these events are Gaussian noise events, where only one sample is outside the trigger boundaries. The transient events are processed off-line and analysed for time-coincidence clustering. Figure 2 shows the spatial distribution of a total of 4235 reconstructed transient events as detected by SPATS between 1 September 2008 and 23 April 2009 (4422 hours integrated livetime). The data show clear and steady sources or 'hot spots' that can be correlated with the refreezing process of the water in sub-surface caverns. Each IceCube drilling season, a "Rodriguez well" (RW) is used as a water reservoir. The main source of transients is the 07-08 RW. There is also steady detection of the 05-06/04-05 RW. The 06-07 RW was a steady source until October 2008. The 08-09 RW has not yet been detected. Transient data-taking continued during the IceCube 08-09 drilling season; the refreezing of the 12 holes nearest to the SPATS array is audible whereas that of the 7 farthest holes is not. No vertices were reconstructed deeper than 400 m.

Fig. 2. Overview of the spatial distribution of transient events detected by SPATS between September 2008 and April 2009. The circles indicate the positions of the IceCube holes; the SPATS strings are indicated by their corresponding letter ID. The Rodriguez wells are represented by squares. The smearing effect is an artefact of the event reconstruction algorithm, which does not account for refraction in the firn.
D. Attenuation length
We have performed 3 classes of attenuation analyses, all of which use the sensors on the frozen-in SPATS strings but with a different sound source: the retrievable pinger, the frozen-in SPATS transmitters and transient events. Table I gives an overview of the SPATS attenuation length studies with their respective uncertainties.

Fig. 3. An example of a fit of ln(amplitude · distance) versus distance for a single channel at 400 m depth, yielding λ = 280 ± 13 m. (Preliminary.)
1) Pinger attenuation length: The retrievable pinger was recorded simultaneously by all SPATS sensors for each of the 4 IceCube holes in which it was deployed during the December 2008-January 2009 period. The pinger holes were almost perfectly aligned relative to the SPATS array, making the single-channel analysis independent of polar and azimuthal sensitivity variations. The energy of the signal, which dominates in the frequency range from 5 to 35 kHz, was extracted both from the waveforms in the time domain (4 pinger holes) and from the power spectrum (3 pinger holes). Figure 3 shows an example of a single-level fit for the energy in the frequency domain (single channel at 400 m depth). For both analyses, the quoted value in Table I is the mean of all fits, with the standard deviation as the error. The result for the time domain analysis uses all levels, as shown in Fig. 4; points with a large error bar (λ/σ_λ < 3) have been excluded (4/47 combinations). There is no evidence for a depth dependence of the attenuation length.
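The fit behind Fig. 3 can be sketched as follows: for a point source the amplitude falls as A(d) = A₀/d · exp(−d/λ), so ln(A·d) is linear in d with slope −1/λ. The snippet below (our own illustration with made-up data points, not SPATS data) extracts λ from such a straight-line fit.

    import numpy as np

    # Hypothetical (distance [m], amplitude [a.u.]) pairs, for illustration only.
    distances = np.array([250.0, 330.0, 420.0, 540.0])
    amplitudes = np.array([1.00, 0.57, 0.33, 0.17])

    # ln(A * d) = ln(A0) - d / lambda  ->  a straight-line fit gives slope = -1/lambda.
    y = np.log(amplitudes * distances)
    slope, intercept = np.polyfit(distances, y, 1)
    attenuation_length = -1.0 / slope
    print(f"fitted attenuation length ~ {attenuation_length:.0f} m")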
2) Inter-string attenuation length:
The complete
SPATS inter-string set of April 2009 consists of all
possible transmitter-sensor combinations excluding the
shallowest transmitters. The transmitters were fired at
25 Hz repetition rate and for each combination a total of 500 pulses was recorded simultaneously by the three channels of the sensor module. An averaged waveform can therefore be obtained and the pressure wave energies extracted.

TABLE I
SPATS ATTENUATION LENGTH STUDIES

Attenuation analysis               | λ (m) | uncertainty | comment on uncertainty
Pinger energy, time domain         | 320   | 60 m        | standard deviation of distribution
Pinger energy, frequency domain    | 270   | 90 m        | standard deviation of distribution
Inter-string energy, all levels    | 320   | 100 m       | standard error of weighted mean of distribution
Inter-string energy, 3-level ratio | 193   | -           | best fit
Transient event analysis           | 200   | -           | best fit

Fig. 4. Pinger energy analysis result from the time domain for all recorded levels (attenuation length per sensor module). (Preliminary.)

An averaged noise waveform allows us to
subtract the noise contribution. Every transmitter in SPATS can be detected by 3 sensors at the same depth but at different distances (strings); this means that only the unknown sensor sensitivities have to be included in the systematic error for a single-level attenuation length fit. We have fitted all data of the transmitters that are detected at three different distances. Each single fit does not constrain the attenuation length very well, due to the large sensor-to-sensor variations in sensitivity. By combining all fits, we take the weighted mean as the value for the attenuation length and the standard error of the weighted mean as its error (Table I). A way to work around the missing calibration of the sensors and transmitters is to build ratios of amplitudes using two transmitter-sensor pairs. For isotropic sensors and transmitters, one such measurement should yield the attenuation length. To minimize the remaining variation of the sensitivity due to the varying polar angle, we limited the amplitude ratios to the levels at 250, 320 and 400 m depth. This ratio method yields an attenuation length with a large systematic error due to unknown azimuthal and remaining polar sensor orientations; therefore only the best fit is quoted in Table I.
3) Transient attenuation length: If a transient source can be localized, its signals can be used to estimate the acoustic attenuation length. The strongest transient source (the 07-08 RW) is excluded since it is so loud that some channels saturate. Moreover, all four SPATS strings are at similar distances from the 07-08 RW. The transient events from the refreezing of hole 37 offer the advantage of mostly being inside the dynamic range of all sensors. In addition, hole 37 was pinged, and all sensors recorded signals simultaneously from the pinger at a depth of 320 m. These pinger data can be used to extract relative sensitivity-correction factors for that precise direction for all sensor channels at 320 m depth on all 4 SPATS strings. The best fit to the sensitivity-corrected single-level transient data yields an attenuation length of 200 m.
IV. CONCLUSIONS
We have presented the most recent results from the SPATS setup. Both pressure and shear wave speeds have been mapped versus depth in the firn and bulk ice. This is the first measurement of the pressure wave speed below 190 m depth in the bulk ice, and the first measurement of the shear wave speed in South Pole ice. The resulting vertical sound speed gradient for both pressure and shear waves is consistent with no refraction between 250 and 500 m depth. Extrapolating sensitivities from laboratory calibrations gives a first estimate of the absolute noise level at depths greater than 200 m and indicates values below 10 mPa integrated over the 10 to 50 kHz frequency range. The in-ice transient rates are low, and most of the transient events can be correlated with well-known anthropogenic sources. We have presented an overview of the different SPATS attenuation length studies; the preliminary results show that the analyses favour attenuation lengths in the 200-350 m region. The current dataset does not allow a distinction between absorption- and scattering-dominated attenuation, and dedicated measurements are under consideration.
V. ACKNOWLEDGMENT
We are grateful for the support of the U.S. National
Science Foundation and the hospitality of the NSF
Amundsen-Scott South Pole Station.
REFERENCES
[1] D. Besson et al., Int. J. Mod. Phys. A 21S1 (2006) 259.
[2] D. Besson et al., Nucl. Instr. and Meth. A (2009), doi:10.1016/j.nima.2009.03.047.
[3] A. Achterberg et al., Astropart. Phys. 26 (2006) 155.
[4] S. Barwick et al., J. Glaciol. 51 (2005) 231.
[5] P. B. Price, J. Geophys. Res. 111 (2006) B02201.
[6] J.-H. Fisher, Diploma thesis, Humboldt-Universität zu Berlin (2006).
[7] J. G. Weihaupt, Geophysics 28(4) (1963) 582.
[8] D. G. Albert, Geophys. Res. Lett. 25, No. 23 (1998) 4257.
[9] F. Descamps et al., Nucl. Instr. and Meth. A (2009), doi:10.1016/j.nima.2009.03.062.
[10] T. Karg et al., these proceedings.
[11] T. Karg et al., Nucl. Instr. and Meth. A (2009), doi:10.1016/j.nima.2009.03.063.
Sensor development and calibration for acoustic neutrino detection in ice
Timo Karg∗, Martin Bissok†, Karim Laihem†, Benjamin Semburg∗, and Delia Tosi‡ for the IceCube collaboration§
∗Dept. of Physics, University of Wuppertal, D-42119 Wuppertal, Germany
†III. Physikalisches Institut, RWTH Aachen University, D-52056 Aachen, Germany
‡DESY, D-15735 Zeuthen, Germany
§See the special section of these proceedings.
Abstract. A promising approach to measure the expected low flux of cosmic neutrinos at the highest energies (E > 1 EeV) is acoustic detection. There are different in-situ test installations worldwide, in water and ice, to measure the acoustic properties of the medium with regard to the feasibility of acoustic neutrino detection. The parameters of interest include the attenuation length, the sound speed profile, the background noise level and transient backgrounds. The South Pole Acoustic Test Setup (SPATS) has been deployed in the upper 500 m of drill holes for the IceCube neutrino observatory at the geographic South Pole. In-situ calibration of sensors under the combined influence of low temperature, high ambient pressure, and ice-sensor acoustic coupling is difficult. We discuss laboratory calibrations in water and ice. Two new laboratory facilities, the Aachen Acoustic Laboratory (AAL) and the Wuppertal Water Tank Test Facility, have been set up. They offer large volumes of bubble-free ice (3 m³) and water (11 m³) for the development, testing, and calibration of acoustic sensors. Furthermore, these facilities allow for verification of the thermoacoustic model of sound generation through energy deposition in the ice by a pulsed laser. Results from laboratory measurements to disentangle the effects of the different environmental influences and to test the thermoacoustic model are presented.
Keywords: acoustic neutrino detection, thermoacoustic model, sensor calibration
I. INTRODUCTION
The detection and spectroscopy of extra-terrestrial ultra-high-energy neutrinos would allow us to gain new insights into astroparticle and particle physics. Apart from the possibility of studying particle acceleration in cosmic sources, the measurement of the guaranteed flux of cosmogenic neutrinos [1] opens a new window to study cosmic source evolution and particle physics at unprecedented center-of-mass energies. However, the fluxes predicted for these neutrinos are very low [2], so detectors with large target masses are required for their detection. One possibility to instrument ice volumes of the order of 100 km³ with a reasonable number of sensor channels is to detect the acoustic signal emitted by the particle cascade at a neutrino interaction vertex [3].
To study the properties of Antarctic ice relevant for acoustic neutrino detection, the South Pole Acoustic Test Setup (SPATS) [4] has been frozen into the upper part of IceCube [5] boreholes. SPATS consists of four vertical strings reaching a depth of 500 m below the surface. The horizontal distances between strings range from 125 m to 543 m. Each string is instrumented with seven acoustic sensors and seven transmitters. The ice parameters to be measured are the sound speed profile, the acoustic attenuation length, the background noise level, and transient noise events in the frequency range from 1 kHz to 100 kHz.
For the design of a large-scale acoustic neutrino detector it is crucial to fully understand the in-situ response of the sensors as well as the thermoacoustic sound generation mechanism.
II. SENSOR CALIBRATION
To study the acoustic properties of the Antarctic ice, such as the absolute background noise level, and to deduce the arrival direction and energy of a neutrino in a future acoustic neutrino telescope, it is essential to measure the sensitivity and directionality of the sensors used, i.e. the output voltage as a function of the incident pressure and its variation with the arrival direction of the incident acoustic wave relative to the sensor. These measurements can be carried out relatively easily in the laboratory in liquid water. The two calibration methods most commonly used are
• the comparison method, where an acoustic signal sent by a transmitter (with negligible angular variation) is simultaneously recorded, at equal distance, with a pre-calibrated receiver and the sensor to be calibrated. Comparing the signal amplitudes in the two receivers yields the desired sensitivity from the sensitivity of the pre-calibrated sensor (a short numerical sketch is given after this list);
• the reciprocity method, which makes use of the electroacoustic reciprocity principle to determine the sensitivity of an acoustic receiver without having to use a pre-calibrated receiver (see e.g. [7]).
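The comparison method amounts to a single ratio. A minimal sketch in Python (the sensitivities and voltages below are placeholders, not measured SPATS values):

def comparison_sensitivity(v_dut, v_ref, s_ref):
    # Both receivers see the same pressure amplitude at equal distance, so
    # S_dut / S_ref = V_dut / V_ref  =>  S_dut = S_ref * V_dut / V_ref
    return s_ref * v_dut / v_ref

s_ref = 1.0e-6   # V/Pa, pre-calibrated reference hydrophone (hypothetical value)
v_ref = 0.40     # V, peak-to-peak signal in the reference receiver
v_dut = 0.25     # V, peak-to-peak signal in the sensor under calibration
print(f"sensitivity = {comparison_sensitivity(v_dut, v_ref, s_ref):.2e} V/Pa")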
All SPATS sensors have been calibrated in 0 °C water with the comparison method [8]. However, neither calibration method is suitable for in-situ calibration of sensors in South Pole ice: no pre-calibrated sensors for ice are available, and reciprocity calibration requires large setups that are not feasible for deployment in IceCube boreholes. Further, directionality studies require a change of the relative positioning of emitter and receiver, which is difficult to achieve in a frozen-in setup.
It is not clear how results obtained in the laboratory in liquid water can be transferred to an in-situ situation where the sensors are frozen into Antarctic ice. We are studying the influence of the following three environmental parameters on the sensitivity separately: low temperature, increased ambient pressure, and different acoustic coupling to the sensor. We will assume that the sensitivity variations due to these factors, obtained separately, can be combined into a total sensitivity change for frozen-in transmitters. This assumption can then be checked further using the two different sensor types deployed with the SPATS setup. Apart from the standard SPATS sensors with steel housings, two HADES-type sensors [9] have been deployed with the fourth SPATS string. These contain a piezoceramic sensor cast in resin and are believed to have different systematics.
A. Low temperatures
The ice temperature in the upper few hundred meters of South Pole ice is about −50 °C [10]. It is not feasible to produce laboratory ice at this temperature in a large enough volume to carry out calibration studies. We therefore study the dependence of the sensitivity on temperature in air. A signal sent by an emitter is recorded with a sensor at different temperatures. To prevent changes in the emissivity of the transmitter, the transmitter is kept at constant temperature outside the freezer and only the sensor is cooled down. The recorded peak-to-peak amplitude is used as a measure of the sensitivity. First results indicate a linear increase of the sensitivity with decreasing temperature (cf. Fig. 1). The sensitivity of a SPATS sensor increases by a factor of 1.5 ± 0.2 when the temperature is lowered from 0 °C to −50 °C (averaged over all three sensor channels).
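The factor 1.5 follows directly from the linear fit shown in Fig. 1. A small sketch (Python with numpy; the data points below are generated from the quoted fit parameters purely for illustration):

import numpy as np

# illustrative amplitude-vs-temperature points consistent with the fit in Fig. 1
T = np.array([0.0, -10.0, -20.0, -30.0, -40.0, -50.0])      # deg C
A = np.array([2.32, 2.53, 2.74, 2.95, 3.16, 3.37])          # V, peak to peak

a, b = np.polyfit(T, A, 1)                   # linear model A(T) = a*T + b
ratio = (a * (-50.0) + b) / b                # sensitivity gain from 0 C to -50 C
print(f"a = {a:.3f} V/degC, b = {b:.2f} V, A(-50C)/A(0C) = {ratio:.2f}")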
B. Static pressure
Acoustic sensors in deep polar ice are exposed to
increased ambient static pressure. During deployment
this pressure is exerted by the water column in the bore-
hole (max. 50 bar at 500 m depth). During re-freezing it
increases since the hole freezes from the top, developing
a confined water volume. The pressure is believed to
decrease slowly as strain in the hole ice equilibrates to
the bulk ice volume. The final static pressure on the
sensor is unknown.
Fig. 1. Measured peak-to-peak amplitude of one SPATS sensor channel in air at different temperatures and a linear fit to the data (fit y = a·T + b with a = −0.021 ± 0.001 V/°C and b = 2.32 ± 0.03 V).

Fig. 2. Measured peak-to-peak amplitude of a SPATS sensor excited by a transmitter coupled from the outside to the pressure vessel. All three channels of the sensor are shown.

A pressure vessel with an inner diameter of 40.5 cm is available at Uppsala University that allows for studies of the sensor sensitivity as a function of ambient pressure. Static pres-
sures between 0 and 800 bar can be reached. In this
study the pressure is increased up to 100 bar. Acoustic
emitters for calibration purposes can be placed inside
the vessel or, free of pressure, outside of it. Cable feeds
allow one to operate up to two sensors or transmitters
inside the vessel. A sensor is placed in the center of
the water filled vessel. The transmitter is coupled from
the outside to the vessel. The recorded peak-to-peak
amplitude is used as a measure of sensitivity while the
pressure is increased. The sensor sensitivity is measured
by transmitting single-cycle gated sine-wave signals with central frequencies from 5 kHz to 100 kHz.
Figure 2 shows the received signal amplitudes for the
three sensor channels of a SPATS sensor as a function
of ambient pressure. No systematic variation of the
sensitivity with ambient pressure is observed. Combin-
ing all available data we conclude that the variation
of sensitivity with static pressure is less than 30% for
pressures below 100 bar.
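The excitation used in this scan is easy to reproduce. A minimal sketch of a single-cycle gated sine burst (the sampling rate and amplitude are arbitrary choices for illustration, not the actual SPATS DAQ settings):

import numpy as np

def single_cycle_burst(f0, fs=1.0e6, amplitude=4.0):
    # one period of a sine at f0 (Hz) with a rectangular gate, followed by a pause
    n = int(round(fs / f0))
    t = np.arange(n) / fs
    burst = amplitude * np.sin(2.0 * np.pi * f0 * t)
    return np.concatenate([burst, np.zeros(4 * n)])

for f0 in (5e3, 20e3, 100e3):       # central frequencies covering the scanned range
    print(int(f0), single_cycle_burst(f0).size)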
C. Sensor-ice acoustic coupling
The acoustic coupling, i.e. the fractions of signal energy transmitted and reflected at the interface between the medium and the sensor, differs significantly between water and ice. It can be determined from the characteristic acoustic impedances of the medium and the sensor; the characteristic acoustic impedance is the product of density and sound velocity and plays a role analogous to the index of refraction in optics. Due to the higher sound speed, the characteristic acoustic impedance of ice is about 2.5 times that of water.
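For orientation, the normal-incidence energy reflection and transmission coefficients follow from the impedances alone. A small sketch with approximate textbook values (not SPATS measurements):

def intensity_coefficients(z1, z2):
    # normal-incidence energy reflection/transmission between media with
    # characteristic impedances z1 (incident side) and z2
    r = ((z2 - z1) / (z2 + z1)) ** 2
    return r, 1.0 - r

z_water = 1.00e3 * 1480.0     # rho * c in kg m^-3 * m s^-1 (approximate)
z_ice   = 0.92e3 * 3900.0     # bulk ice, pressure waves (approximate)
print(intensity_coefficients(z_water, z_ice))   # roughly 17% reflected, 83% transmitted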
Its influence will be studied in the Aachen Acoustic
Laboratory (Sec. III), where it will be possible to carry
out reciprocal sensor calibrations in both water and ice,
and also to use laser induced thermoacoustic signals as
a calibrated sound source.
III. NEW LABORATORY FACILITIES
Two new laboratories have been made available to the
IceCube Acoustic Neutrino Detection working group for
signal generation studies and sensor development and
calibration.
a) Wuppertal Water Tank Test Facility: For rapid prototyping of sensors and calibration studies in water, the Wuppertal Water Tank Test Facility offers a cylindrical water tank with a diameter of 2.5 m and a depth of 2.3 m (11 m³). The tank is built up from stacked concrete rings and has a walkable platform on top. It is equipped with a positioning system for sensors and transmitters and a 16-channel PC-based DAQ system (National Instruments USB-6251 BNC).
The size of the water volume allows for the clean sep-
aration of emitted acoustic signals and their reflections
from the walls and surface. This makes it possible to
install triangular reciprocity calibration setups with side
lengths of up to 1 m. Further, installations to measure the
polar and azimuthal sensitivity of sensors are possible.
b) Aachen Acoustic Laboratory: The Aachen Acoustic Laboratory is dedicated to the study of thermoacoustic sound generation in ice. A schematic overview of the setup is shown in Fig. 3. The main part is a commercial cooling container (6 × 2.5 × 2.5 m³), which can reach temperatures down to −25 °C. An IceTop tank, an open cylindrical plastic tank with a diameter of 190 cm and a height of 100 cm [6], is located inside the container. The IceTop tank has a freeze-control unit that makes the production of bubble-free ice possible. The freeze-control unit mainly consists of a cylindrical semipermeable membrane at the bottom of the container, which is connected to a vacuum reservoir and a pressure regulation system; the membrane allows for degassing of the water. A total volume of ≈ 3 m³ of bubble-free ice can be produced. A full freezing cycle takes approximately sixty days, with the freezing proceeding from top to bottom.
Fig. 3. Overview of the AAL setup with zoom on the mirror holder
(top), the sensor positioning system (bottom, left) and a sensor (bottom,
right).
On top of the container, a Nd:YAG laser is installed in a light-tight box with an interlock connected to the laser control unit. The laser has a pulse repetition rate of up to 20 Hz and a peak energy per pulse of 55 mJ at 1064 nm, 30 mJ at 532 nm, and 7 mJ at 355 nm wavelength. The laser beam is guided into the container and deposited at variable positions on the ice surface by a set of mirrors with coatings for the above-mentioned wavelengths. The optical feed-through consists of a tilted quartz window to avoid damage to the laser cavity by reflected laser light. For the detection of thermoacoustic signals, 18 sensors are mounted on a sensor positioning system. The positioning system has three levels; on each level six sensors are placed in a hexagonal geometry. Along with the sensors, 18 sound emitters are deployed for calibration and test purposes. The sensors will be calibrated reciprocally. The positioning system will also include a reciprocal calibration setup for HADES sensors and the ability to install a SPATS sensor for calibration purposes. In addition, two temperature sensors are deployed at each level. The acoustic sensors are pre-amplified, low-cost, piezo-based ultrasound sensors, usually used for distance measurements. The sensors show a strong variation of signal strength with incident angle. This directionality has to be studied, but it is actually useful for suppressing reflected signals. The sensors are read out continuously by a LabVIEW-based DAQ framework with a NI PCIe-6259M DAQ card. The framework includes a temperature and acoustic noise monitoring
system.
IV. STUDIES OF THERMOACOUSTIC SIGNAL GENERATION
A detailed understanding of thermoacoustic sound generation in ice is crucial for designing an acoustic extension to the IceCube detector. The dependence of the signal strength on the deposited energy as well as on the distance to the sensor is of great interest. The pulse shape and the frequency content also have to be studied systematically with respect to various cascade parameters. The spatial distribution of the acoustic signal has to be investigated, i.e. the acoustic disk and its dependence on the spatial and temporal energy deposition distribution. In addition, the AAL setup will be able to study the thermoacoustic effect in a wide temperature range, from +20 °C to −25 °C, and possible differences of the effect between ice and water.
In the Aachen Acoustic Laboratory, the thermoacoustic signal is generated by a Nd:YAG laser. A laser-induced thermoacoustic signal differs from a signal produced by a hadronic cascade. While a cascade's energy deposition profile can be described by a Gaisser-Hillas function, the laser intensity drops off exponentially. Likewise, the lateral profile of a cascade follows an NKG function, whereas, assuming a TEM00 mode in the far-field region, the typical laser-beam profile is Gaussian. Knowing this, a recalculation of the signal properties from a laser-induced pulse to a cascade-generated pulse is possible. The frequency content of the signal is expected to vary with the beam diameter, while too short a penetration depth will result in an acoustic point source rather than a line source. The absorption coefficient of light in water or ice varies strongly with wavelength. The fundamental wavelength of the laser (1064 nm) is absorbed within a few centimeters, while the second harmonic (532 nm) has an absorption length of ≈ 60 m. The third harmonic at 355 nm, with an absorption length of ≈ 1 m, is expected to be the most suitable wavelength to emulate a hadronic cascade with a typical length of 10 m and a diameter of 10 cm. The diameter of the heated ice volume has to be controlled by optics inside the container that widen the beam.
With an array of 18 sensors, the Aachen Acoustic Lab-
oratory allows the study of the spatial distribution of the
generated sound field, as well as the frequency content
with varying beam parameters. The first thermoacoustic
signal has been generated and detected in a test setup
with preliminary sensor electronics in order to determine
a reasonable gain for the pre-amplifier while avoiding
saturation. A small volume of bubble-free ice has been produced, containing a sensor and an emitter. Laser pulses (wavelength 1064 nm, 55 mJ per pulse) have been shot at the ice block. A zoomed view of the first recorded waveform is presented in Fig. 4; the distance between the laser spot and the sensor is approximately 15 cm, and the sensor gain factor is 22.

Fig. 4. Thermoacoustic pulse in ice, generated with a laser at 1064 nm wavelength and a beam diameter of ≈ 1 µm.

A Fourier transform of the pulse implies a central frequency of ≈ 100 kHz, which is expected for
such a small beam diameter. In order to see the expected
bipolar pulse, further studies have to be performed to
determine the transfer function of the sensors.
V. CONCLUSIONS
Detailed understanding of the thermoacoustic sound
generation mechanism and the response of acoustic sen-
sors in Antarctic ice is necessary to design an acoustic
extension for the IceCube neutrino telescope. While in-
situ calibrations in deep South Pole ice are inherently
difficult, different environmental influences can be stud-
ied separately in the laboratory. No change in sensor
response with increasing ambient pressure was found; a
linear increase in sensitivity with decreasing temperature
was observed. Intense pulsed laser beams can be used
to generate thermoacoustic signals in ice which can also
be used as an in-ice calibration source.
ACKNOWLEDGMENTS
We are grateful for the support of the U.S. National
Science Foundation and the hospitality of the NSF
Amundsen-Scott South Pole Station.
This work was supported by the German Ministry for
Education and Research.
REFERENCES
[1] V. S. Berezinsky and G. T. Zatsepin, Phys. Lett. B 28 (1969) 423.
[2] R. Engel, D. Seckel and T. Stanev, Phys. Rev. D 64 (2001) 093010, arXiv:astro-ph/0101216.
[3] G. A. Askaryan, At. Energ. 3 (1957) 152.
[4] J. Vandenbroucke et al., Nucl. Instr. and Meth. A (2009), doi:10.1016/j.nima.2009.03.064, arXiv:0811.1087 [astro-ph].
[5] http://icecube.wisc.edu/
[6] T. K. Gaisser, in Proceedings of the 29th International Cosmic Ray Conference (ICRC 2005), arXiv:astro-ph/0509330.
[7] R. J. Urick, Principles of Underwater Sound, 3rd ed., Peninsula Publishing (1983).
[8] J.-H. Fischer, Diploma thesis, Humboldt-Universität zu Berlin (2006).
[9] B. Semburg et al., Nucl. Instr. and Meth. A (2009), doi:10.1016/j.nima.2009.03.069, arXiv:0811.1114 [astro-ph].
[10] P. B. Price et al., Proc. Nat. Acad. Sci. U.S.A. 99 (2002) 7844.
A new method for identifying neutrino events in IceCube data
Dmitry Chirkin∗
∗University of Wisconsin, Madison, U.S.A.
Abstract. A novel approach for selecting high-quality muon neutrino events in IceCube data is presented. The rate of air shower events mis-reconstructed as signal is first reduced with a geometrical (software) trigger. The final event selection is performed with a machine-learning method designed specifically for IceCube data. It takes into account some generic properties of IceCube events, e.g., the fact that the separation of signal from background is more difficult (requiring tighter cuts on the quality parameters) for horizontal than for vertically up-going tracks. The method compares favorably to other techniques in situations with both high and low simulation statistics.
Keywords: neutrino search, machine learning, event selection
I. INTRODUCTION
An important task of a neutrino telescope like IceCube is identifying extra-terrestrial neutrinos that are interspersed within a background, orders of magnitude larger, of particles originating in the showers produced by cosmic rays in the Earth's atmosphere.
As a first step, a high-purity atmospheric (plus possible extraterrestrial) neutrino event sample is selected, with only a small contaminating fraction of mis-reconstructed atmospheric muon events. Only neutrinos can cross the overburden of the Earth in the upward direction; however, selecting events reconstructed as upward-moving leaves many mis-reconstructed atmospheric muons in the sample, improving the ratio of neutrino to contaminating muon events (initially at ∼10^−6) by only a factor of ∼100.
The problem is further exacerbated by a highly uneven
contamination of the mis-reconstructed muons in several
of the analysis variables, most importantly the zenith
angle. This contamination is smaller for up-going direc-
tions and increases for more horizontal tracks, growing
rapidly near and above the horizon. It is therefore
difficult to arrive at an event selection method that
provides optimized cut surfaces simultaneously for all
zenith angles. Splitting the cut optimization into different zenith bins leads to fluctuations of the cut parameters from one zenith-angle bin to the next that are perceived as unphysical, and in situations with limited simulated data splitting it into several zenith-angle bins is undesirable. This author has also performed an SVM-based event selection optimization and found that training the SVM gets more difficult for zenith-angle ranges extending above the horizon.
The above considerations led to the development of a new framework for selecting and applying cuts on quality parameters, which in the following is called the "Subset Browsing Method", or SBM for short.
The quality parameters used with the event selection
method of this paper build upon those discussed previ-
ously [1].
II. SIMPLE EXAMPLE
First consider a simple example employing only two parameters, chosen such that events with lower values of each parameter have lower background contamination. These can be, e.g., the zenith angle (0 degrees for up-going to 90 degrees for horizontal tracks) and the estimated angular resolution (e.g., the half-width of the likelihood function at the minimum corresponding to the reconstructed track direction). Both can be used to remove the background of mis-reconstructed events, one through the basic reconstruction property, and the other through our prior knowledge that the contamination is higher for tracks near the horizon.
The toy simulated events are divided randomly into two groups: the training set, which is used for the training of the machine, and the testing set, which is used to judge its performance. Both sets, while drawn from the same parent distribution, are statistically independent. The toy "data" events are also simulated and drawn from a somewhat wider signal distribution (to demonstrate the effect of cuts in the transitional region between signal and background). The three steps of the machine application are steps 1a, 1b, and 2, as shown in Figure 1.
The training of the machine in this simple example is achieved by identifying the "outlying" background events (on the signal side of the distribution) and creating the "angle cuts" (shown with black straight lines) by cutting away everything on the rejected sides of such cuts (i.e., everything above and to the right of the background event, including that background event itself).
The cuts so identified will obviously remove all background events in the training dataset. As seen from the second row of Figure 1, these cuts do not remove all of the background events when applied to the testing dataset, so a further step, here called 1b, is necessary. Using the angle cuts derived in step 1a, a quality parameter (SBM*) is constructed, which is simply the count of "angle cuts" of step 1a that fall into the bad quadrant (up and to the right) of a tested event; see Figure 2.
Fig. 1. Two event populations are shown (panels: training, testing, and data sets at steps 1a, 1b, and 2): red is signal for the training and simulated testing event sets, and all data in the data set. Blue points (located up and to the right of the red points) are background events. Lower values on both x and y mean better quality. In steps 1a and 1b, empty circles show background events removed by the skeleton cuts; aqua and pink points show background and signal (or data) events, respectively, that are removed by the machine quality-parameter cut set at 1.5. In the third column, events removed by step 2 are shown as empty circles, and events additionally removed by the quality-parameter cut are shown in pink, as before. The black lines show the skeleton cuts and go through the outermost background events of the training set.

This quality parameter could also be constructed as a weighted sum, as described in the following section, shown for comparison as SBM in Figure 4.
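A minimal sketch of the SBM* count just described (Python with numpy; the anchor points and the tested event below are made up for illustration):

import numpy as np

def sbm_star(event, anchors):
    # Count the step-1a background "angle cut" anchors that lie in the bad
    # quadrant (up and to the right) of the tested event. Lower parameter
    # values mean better quality, so a larger count means the event sits
    # deeper inside the signal region.
    x, y = event
    return int(np.sum((anchors[:, 0] >= x) & (anchors[:, 1] >= y)))

anchors = np.array([[2.0, 1.5], [1.0, 3.0], [3.5, 0.5]])   # illustrative anchors
print(sbm_star((-1.0, -0.5), anchors))                     # -> 3, a signal-like event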
A map of the quality parameter (SBM*) is shown in Figure 3. It is clear that through the application of the quality parameter some space is inserted between the cut structure achieved in step 1a and the events with a quality parameter greater than 0. Figure 4 shows that a value of SBM* = 2.5 completely separates signal from background in the testing dataset of this simple example.
III. UNSIMULATED EVENTS AND STEP 2
After the application of steps 1a and 1b to the real IceCube data it became obvious that, although much of the background resembling that present in our simulation was removed, some "unsimulated" background remained in the data. This affected the agreement in parameter distributions and was particularly evident upon visual inspection. One class of such events appeared to contain two or more coincident but independent muon hit patterns that happened near each other with much overlap in time and failed to be split by the topological trigger. With more simulation we would most likely have been able to correctly identify these events.
Another class of events appeared to contain a bright electromagnetic or hadronic shower (cascade), with a rate of occurrence higher than that predicted by the simulation.
While these events, when understood, could be very interesting and make a valuable contribution to the final event selection, it is unclear at this point whether they should be classified as signal or background. Moreover, since they are not simulated as either, the detector effective area to these events cannot be estimated, so they do not contribute to any of the physics results.

Fig. 2. Shown are events remaining in the testing dataset after the application of step 1a, signal in red and background in blue. The legend indicates the value of the SBM* quality parameter for the shown events. For each value of the SBM* quality parameter the bad sides of a representative event are shown with straight lines. The SBM* quality parameter is simply the count of the "outlying" background events of the training dataset that gave rise to the "angle cuts" of step 1a (indicated with black squares and black solid lines).
A common way to deal with this is by raising the quality-parameter cut of an analysis past the point where all simulated background is removed, to the point where agreement between data and signal simulation is achieved. This is possible if the quality parameter judges not only how far a given event is from the background region, but also how close it is to the signal region.
In the method presented here the quality parameter is constructed using only the information about the outlying background events that form the "angle cuts" (after some initial amount of signal events is carved out by step 1a). Thus, the quality parameter judges only the distance to the background region (contrary to other approaches). To achieve the "similarity with signal events" another step (called step 2) becomes necessary. This event selection step removes all data events that have no signal events of the training set (remaining after the application of steps 1a and 1b) on their bad sides. The effect of step 2 is demonstrated in the last column of Figure 1: all data events on the background side of the signal region are removed.
IV. MULTI-DIMENSIONAL GENERALIZATION
First we reiterate that the SBM method relies on the important observation that most of the quality parameters used in the analysis of IceCube data have the following property: as the fits become less constrained at a lower number of hit channels N_ch or strings N_str, the cuts on the quality parameters necessary to reach a given signal purity need to be tightened. Alternatively, the cuts applied to quality parameters of events with higher N_ch or N_str
can be relaxed somewhat.

Fig. 3. Map of the quality parameter (SBM*), calculated according to the prescription of Figure 2. The highest-quality region is shown in red.

Fig. 4. Quality parameters: simple sum over the "angle cuts" (SBM*) and weighted sum (SBM). Red solid and blue dotted lines show the distributions of signal and background events, respectively.

A
similar behavior of cuts on quality parameters can be argued for their dependence on the reconstructed zenith angle θ: at angles closer to the horizon the number of background events seeping through is higher than for tracks going up closer to the vertical, so to reach the same purity the cuts on the quality parameters need to be tighter for events with a higher reconstructed zenith angle θ. To summarize, we introduce the following
Basic cut property: the cuts necessary to reach the same signal purity satisfy the following conditions:

c(θ*, N_ch, N_str) ≤ c(θ0, N_ch, N_str)  for  θ* ≥ θ0,
c(θ, N*_ch, N_str) ≤ c(θ, N0_ch, N_str)  for  N*_ch ≤ N0_ch,
c(θ, N_ch, N*_str) ≤ c(θ, N_ch, N0_str)  for  N*_str ≤ N0_str.
This relies on the assumption that lower cut values imply tighter cuts (some of the quality parameters may need to be taken with a minus sign, or as one over their value, to satisfy this assumption). Parameters θ, N_ch, and N_str that allow such a behavior of cuts are in the following called basic cut variables. The following discussion is simplified with a
Definition: a cut c* defined for a set of events with θ*, N*_ch, and N*_str is said to be operating on a subset of the events of a cut c0 defined for a set of events with θ0, N0_ch, and N0_str if θ* ≥ θ0, N*_ch ≤ N0_ch, and N*_str ≤ N0_str.
Main cut property: a cut operating on a given set of events also operates on all its subsets.
To rephrase, a cut c0 defined for a set of events with θ0, N0_ch, and N0_str also applies to any set of events with θ*, N*_ch, and N*_str (which has its own cut c*) if θ* ≥ θ0, N*_ch ≤ N0_ch, and N*_str ≤ N0_str. To prove this we need to show that c0 ≥ c*. Using two intermediate sets of events and the basic cut property introduced above,

c0 = c(θ0, N0_ch, N0_str) ≥ c(θ*, N0_ch, N0_str) ≥ c(θ*, N*_ch, N0_str) ≥ c(θ*, N*_ch, N*_str) = c*.
This property allows us to consider all cuts as operating not only on events with θ = θ0, N_ch = N0_ch, and N_str = N0_str, but rather on all events with θ ≥ θ0, N_ch ≤ N0_ch, and N_str ≤ N0_str.
Definition: the cut c0 associated with θ0, N0_ch, and N0_str is considered redundant if there exists another cut c* associated with some other θ*, N*_ch, and N*_str such that c* ≤ c0, θ* ≤ θ0, N*_ch ≥ N0_ch, and N*_str ≥ N0_str. This is because the new cut c* clearly implies c0 by the main cut property.
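The subset relation and the redundancy test above reduce to a handful of comparisons. A minimal sketch (Python; the Cut container is our own illustration, not the analysis code):

from dataclasses import dataclass

@dataclass
class Cut:
    c: float        # cut value on the quality parameter
    theta: float    # zenith angle of the defining event
    nch: int        # number of hit channels
    nstr: int       # number of hit strings

def operates_on(cut, theta, nch, nstr):
    # main cut property: a cut defined at (theta0, Nch0, Nstr0) also operates
    # on all events with theta >= theta0, Nch <= Nch0 and Nstr <= Nstr0
    return theta >= cut.theta and nch <= cut.nch and nstr <= cut.nstr

def is_redundant(c0, others):
    # c0 is redundant if another cut with c* <= c0, theta* <= theta0,
    # Nch* >= Nch0 and Nstr* >= Nstr0 exists, since that cut implies c0
    return any(o.c <= c0.c and o.theta <= c0.theta and
               o.nch >= c0.nch and o.nstr >= c0.nstr
               for o in others if o is not c0)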
For each background event i_b in the simulated training dataset, its quality parameters are used to create n cuts associated with the θ_{i_b}, N^{i_b}_ch, and N^{i_b}_str of that event. The signal purity p_{i_b} = s_{i_b}/(s_{i_b} + b_{i_b}) of the events with θ ≥ θ_{i_b}, N_ch ≤ N^{i_b}_ch, and N_str ≤ N^{i_b}_str is then calculated and used to find the i_b that defines cuts in the region with the worst purity. Out of the n cuts associated with i_b, the cut that results in the smallest loss of signal events is then chosen and applied to the whole subset on which this cut operates. To accelerate this process, if a cut is encountered that removes no signal events, it is used immediately, without taking into account the purity of the subset of events on which it operates.
This procedure is then repeated until the background events in the simulated training dataset are exhausted. At that point all cuts of all background events are cycled through once again, and those that result in no further loss of the remaining signal events (which are said to form a core of signal events) are saved into the trained cut set of the machine. One can further reduce this set by removing the redundant cuts from it, resulting in an irreducible trained cut set, which is the result of this machine training procedure.
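The training loop fits in a few lines. A compact sketch (Python; events are plain dicts with the basic cut variables and a list 'q' of quality parameters; this is an illustration of the procedure, not the actual implementation, and the final redundancy pruning described above is omitted):

def in_subset(ev, anchor):
    return (ev['theta'] >= anchor['theta'] and
            ev['nch'] <= anchor['nch'] and ev['nstr'] <= anchor['nstr'])

def train(signal, background, n_params):
    cuts, sig, bkg = [], list(signal), list(background)
    while bkg:
        def purity(b):  # purity of the subset defined by background event b
            s = sum(in_subset(e, b) for e in sig)
            n = sum(in_subset(e, b) for e in bkg)
            return s / float(s + n) if s + n else 1.0
        worst = min(bkg, key=purity)
        def signal_loss(k):  # signal events removed by candidate cut k of 'worst'
            return sum(in_subset(e, worst) and e['q'][k] >= worst['q'][k] for e in sig)
        k = min(range(n_params), key=signal_loss)
        cut = dict(worst, k=k, value=worst['q'][k])
        cuts.append(cut)
        removed = lambda e: in_subset(e, cut) and e['q'][k] >= cut['value']
        sig = [e for e in sig if not removed(e)]
        bkg = [e for e in bkg if not removed(e)]   # always removes 'worst' itself
    return cuts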
One can obviously remove all background events (i.e., reach 100% signal purity) by applying all cuts from the irreducible trained cut set to the simulated training dataset. However, when the same is applied to the separately generated testing dataset, a number of background events seep through and the signal purity never reaches 100%.
This may happen, e.g., if we encounter a background event with, say, N*_ch higher than the N^{i_b}_ch of every background event in the simulated training dataset, so that no cuts are available that would remove such an event from the testing dataset.
A way around this is to find at least one cut c with θ_c ≥ θ*, N^c_ch ≤ N*_ch, and N^c_str ≤ N*_str such that c* = q* ≤ c (q* being the quality parameter of the tested event). By the main cut property, the cut that would achieve the same purity p_c on the subset defined by θ*, N*_ch, and N*_str as the cut c achieves on the subset defined by θ_c, N^c_ch, and N^c_str is necessarily no tighter than the cut c. That is, applying the cut c on the subset defined by θ*, N*_ch, and N*_str achieves at least the same or a higher level of purity than p_c. Now, if an event defined by its quality parameter c* = q* but placed at the basic cut variables θ_c, N^c_ch, and N^c_str of the cut c is passed by the trained cut set, the cut c is called the purity cut defined for the original event (defined by its own c*, θ*, N*_ch, and N*_str).
The existence of at least one such cut c for each of the testing dataset events passed by the machine guarantees that the purity in the regions with extrapolated θ, N_ch, and N_str is at least as good as, or better than, in the regions for which background events existed in the simulated training dataset. This is an important advantage of the discussed method compared to other machine learning techniques.
Counting all purity cuts available for a given testing dataset event provides an important machine quality parameter whose value is higher for events that are more likely to be signal and lower for events that are more likely to be background. It appears that a cut on this parameter improves the purity in all subsets by an equal amount. In order to improve the purity in all subsets to the same final value, one may weight the terms in the quality-parameter sum with the initial purity of the simulated training dataset on the subsets of the cuts used in the sum (thus leading to the weighted-sum definition of SBM, as shown in Figure 4).
We call the machine learning method described here the subset browsing method because of the way one has to browse through the subsets on which the cuts of the trained cut set are defined in order to calculate the quality parameter separating signal from background. The quality parameter itself is called the SBM quality parameter, or simply SBM.
The irreducible trained cut set forms a "skeleton" of cuts that are applied to the testing dataset, achieving the initial SBM cut level SBM = 0. The SBM quality parameter is usually normalized so that the highest value of SBM of a background event in a simulated testing dataset is 1 (e.g., in the plot of SBM in Figure 5).
V. CONCLUSIONS AND OUTLOOK
Fig. 5. Distribution of the quality parameter after step 1a, for one year of IC-22 data. The curves show the neutrino signal, single and coincident CORSIKA showers, and data after SBM steps 1 and 2. The >90% estimated purity is achieved at SBM = 0.36.

We present a new framework for selecting and applying cuts on the quality parameters, here called SBM. It is
particularly well-suited for application to IceCube data
analysis as it takes into account (both during training
and for event classification) some obvious relationships
between the quality parameters, which are hardwired
into the algorithm.
The SBM appears to separate signal and background well even if the number of simulated training dataset events is low. The SBM extrapolation behavior is very robust, and the performance of the extrapolation improves as the simulated testing dataset statistics increase, since more purity cuts become available to testing events in regions less populated by cuts of the trained cut set. The machine will not go around individual background events of the training dataset (a condition that may occur in other methods) because the cuts, by construction, are monotonic functions of the basic cut variables.
Additionally, the learning method itself is very simple and has virtually no parameters to set. Most of the machine is implemented with only an application of the ≤ operator (and a simple summation for estimating the quality parameter).
The method splits the classification of the data events into two steps: dissimilarity with background, and similarity with signal, thus allowing one to investigate possible "unsimulated" classes of events.
This method was used to identify atmospheric neutrino events in IceCube data taken during the 2007 operation season (see Figure 5 and reference [2]).
As an outlook, this method may be well-suited to analyses that depend on a veto region around the interesting events in the detector, as most of the cuts on the quality parameters can be relaxed as the veto region is expanded, thus satisfying the basic cut property if the veto size is chosen as a basic cut variable.
The method could be further improved in the future by implementing a technique similar to the boosting used for BDTs.
REFERENCES
[1] D. Chirkin et al., Effect of the improved data acquisition system of IceCube on its neutrino-detection capabilities, 30th ICRC, Merida, Mexico, arXiv:0711.0353.
[2] D. Chirkin et al., Measurement of the atmospheric neutrino energy spectrum with IceCube, these proceedings.
Muon Production of Hadronic Particle Showers in Ice and Water
Sebastian Panknin∗, Julien Bolmont†, Marek Kowalski∗ and Stephan Zimmer‡
∗Humboldt-Universität zu Berlin, Germany
†LPNHE, Université Pierre et Marie Curie Paris 6, Université Denis Diderot Paris 7, CNRS/IN2P3, France
‡Humboldt-Universität zu Berlin, Germany; now at Uppsala Universitet, Sweden
Abstract. One of the neutrino signatures in Cerenkov neutrino detectors is isolated particle showers induced by neutrinos of all flavors. Hadronic showers can produce muons during the shower development, and the appearance of the showers can be changed significantly by such high-energy muons. We use a modified version of the air shower simulation program CORSIKA to simulate the generation of muons in salt water and discuss how the results can be applied to ice. In addition, a simple analytical model is derived that provides scaling relations for the muon energy spectrum and its dependence on the primary particle.
Keywords: shower simulation, muons, neutrino detection
I. INTRODUCTION
Large neutrino telescopes in ice or water, like IceCube [1], Baikal [2] and ANTARES [3], are in operation or under construction. They detect Cerenkov light from particles created by neutrino interactions. The most prominent signature is up-going muons generated by charged-current interactions of muon neutrinos. Another possibility is the search for neutrino-induced cascades [4]. Such a search is sensitive to electron and tau neutrinos. In addition, unlike a search for neutrino-induced muons, it is not restricted to up-going tracks, since the signature of isolated cascades allows, in principle, good separation from the down-going atmospheric muon background. A requirement for a search for neutrino-induced events is their accurate simulation.
Muons can also be produced in hadronic cascades. Due to their track-like light signature they may influence the shape of the event and thus should be simulated and parameterized.
We develop a simple analytical calculation for the produced muon flux (section II). We show the results of a full shower simulation with a modified version of the well-known air-shower simulation program CORSIKA [5] (section III) and use this to parameterize the muon flux (section IV). We conclude in section V.
II. ANALYTICAL MODEL FOR THE MUON FLUX
The basic properties of electro-magnetic showers can
be understood through the Heitler model [6]. After one
interaction length a photon creates an electron-positron
pair, and an electron/positron radiates a bremsstrahlung
photon. This repeats every interaction length while the
energy is distributed over the generated particles, so that
more and more particles with lower and lower energies
are created. Hadronic showers are more complicated,
since in each interaction a wide range of hadronic
particles can be created, which through decay increase
the complexity further. However, expanding the simple
Heitler model helps to get a basic understanding of the
muon generation in hadronic cascades.
We follow the extended Heitler model of [7] and consider a hadronic shower generated by a particle of primary energy E_0. In each interaction N_mul ≈ 10 hadrons are produced. About one third of them are neutral hadrons like π0, which lead to electromagnetic sub-showers.
So in generation n we have N_H(n) hadrons with average energy E_H(n):

N_H(n) = (2/3 N_mul)^n    (1)
E_H(n) = E_0 / N_mul^n    (2)

Combining these leads to the energy-dependent number of hadrons per generation,

ΔN_H / Δlog_{N_mul}(E_0/E_H) (E_H) = (E_H/E_0)^{-κ}   with   κ := 1 + log_{N_mul}(2/3),    (3)

and to the hadronic flux

dN_H/dE_H (E_H) = (ln N_mul / E_0) (E_H/E_0)^{-(κ+1)}.    (4)
Next we consider different types of hadrons h. In the following we keep using H if a variable refers to all hadrons. We neglect the different reaction channels for different hadrons and assume a constant branching ratio B_h for the production of a hadron h, independent of the incident type. (We mainly focus on pions and kaons.)
The muon flux can now be derived as the hadron flux multiplied by the decay probability P_{h→µ} and folded with the energy distribution dn_h/dE_µ(E_µ, E_h) of the generated muon:

dN_µ/dE_µ (E_µ) = Σ_h B_h ∫_0^∞ dn_h/dE_µ(E_µ, E_h) P_{h→µ} dN_H/dE_H(E_h) dE_h    (5)
TABLE I
Numerical values used for pions and kaons. Their contributions A_h to the amplitude as well as the full amplitude are provided.

h    B_h    b_{h→µ}    α_h     r_h            A_h [GeV^-1]
π    0.9    1.00       67.1    5.73 · 10^-1   20.03 · 10^-3
K    0.1    0.64       9.03    4.58 · 10^-2    6.00 · 10^-3
A = Σ_h A_h:                                   26.30 · 10^-3
The decay probability of a hadron with mass m_h, lifetime τ_h, and branching ratio b_{h→µ} for the decay into muons is given by

P_{h→µ} = b_{h→µ} Λ/λ_D = b_{h→µ} / (1 + α_h E_h) ≈ b_{h→µ} / (α_h E_h)   with   α_h := τ_h / (m_h λ_I),    (6)

where 1/Λ := 1/λ_I + 1/λ_D, λ_I is the interaction length of the hadron, and λ_D = E_h τ_h / m_h is its decay length, assuming E_h ≫ m_h. The approximation holds for α_h E_h ≫ 1.
The probability to create a muon of energy E_µ from the decay of a hadron with energy E_h is given by the energy distribution dn_h/dE_µ(E_µ, E_h). An unpolarized meson undergoing a two-body decay generates mono-energetic muons, isotropically distributed in direction in its rest frame (see [8]). In the laboratory system this transforms into a constant distribution between the minimal energy r_h E_h, with r_h = m_µ²/m_h², and the maximal energy E_h:

dn_h/dE_µ (E_µ, E_h) = 1 / ((1 − r_h) E_h)   for   r_h E_h ≤ E_µ ≤ E_h,   and 0 otherwise.    (7)
Applying eqs. (4), (6) and (7) to eq. (5), we obtain the muon flux

dN_µ/dE_µ (E_µ) = A (E_0/GeV)^κ (E_µ/GeV)^{−(κ+2)}
with   A = (ln N_mul / (κ+2)) Σ_h B_h (b_{h→µ}/α_h) (1 − r_h^{κ+2}) / (1 − r_h) · 1/GeV².    (8)

This results in the numerical value A = 26.3 · 10^−3 GeV^−1 for the amplitude and, according to eq. (3), κ = 0.824 for the exponent. The values used for pions and kaons are summarized in Table I.
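As a numerical cross-check of eqs. (3) and (8), the short Python sketch below evaluates κ and A from the Table I inputs; it reproduces the quoted values up to the rounding of the inputs:

import math

N_mul = 10.0
kappa = 1.0 + math.log(2.0 / 3.0, N_mul)            # eq. (3): kappa = 0.824

hadrons = {                                          # B_h, b_{h->mu}, alpha_h, r_h from Table I
    'pi': (0.9, 1.00, 67.1, 5.73e-1),
    'K':  (0.1, 0.64, 9.03, 4.58e-2),
}

A = 0.0
for name, (B, b, alpha, r) in hadrons.items():
    A_h = (math.log(N_mul) / (kappa + 2.0)) * B * (b / alpha) \
          * (1.0 - r ** (kappa + 2.0)) / (1.0 - r)   # eq. (8), per hadron species
    print(f"A_{name} = {A_h:.2e} GeV^-1")
    A += A_h
print(f"kappa = {kappa:.3f}, A = {A:.2e} GeV^-1")    # about 26e-3 GeV^-1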
III. CORSIKA SIMULATION
For the simulation we used a modified version of CORSIKA based on the official version 6.2040, which enables shower simulation in salt water (see [9]). The interaction models used are Gheisha for low energies and QGSJet 01 for high energies. We simulated 1000 showers at a primary energy of E_0 = 1 TeV, 1000 at 10 TeV, 100 at 100 TeV, and ten at 1 PeV, with a proton as the primary particle.
The configuration used (Fig. 1) places a CORSIKA observation level 9 m behind the interaction point. Here the shower is expected to be fully developed, while only the very low energy muons will already have decayed.
Fig. 1. CORSIKA configuration: the incoming proton p interacts in salt water 9 m above the observation level. The produced muons µ roughly follow the direction of the incoming particle and are recorded as they pass through the observation level.
IV. DERIVED MUON FLUX
With the simulation data we calculate the muon flux. To make this comparable to our model we need to calculate the muon energy at generation, E_µ, from the muon energy at the observation level, E^obs_µ, given by the simulation. This is a small correction, important only for low-energy muons. The muons most interesting for us are those with track lengths above about 10 m, namely those with a range bigger than the typical shower size. These muons have energies above 3 GeV, which is high enough to keep the systematic error due to the energy correction small.
The energy loss can be approximated by

−(1/ρ) dE_µ/dx = a + b E_µ,    (9)

where the medium density is ρ = 1.02 g cm^−3 and the interaction constants are a = 2.68 MeV cm² g^−1 and b = 4.7 · 10^−6 cm² g^−1 (see [10]). Solving this provides the formula for the energy at generation:

E_µ = (E^obs_µ + a/b) e^{ρbx} − a/b.    (10)

Here we choose x = 7 m, the distance from the first interaction point to the observation level reduced by about three radiation lengths [10].
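A one-line implementation of eq. (10), for orientation (units converted explicitly; the 3 GeV test value is only an example):

import math

def energy_at_generation(e_obs, x_m=7.0, rho=1.02, a=2.68e-3, b=4.7e-6):
    # e_obs in GeV, x_m in metres, rho in g/cm^3, a in GeV cm^2/g, b in cm^2/g
    x_cm = 100.0 * x_m
    return (e_obs + a / b) * math.exp(rho * b * x_cm) - a / b

print(energy_at_generation(3.0))   # ~4.9 GeV: roughly the 2 MeV/cm ionization loss over 7 m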
We show the normalized flux E_µ² E_0^{−κ} dN_µ/dE_µ(E_µ), which according to eq. (8) should follow a power law with only a small dependence on the primary energy (Fig. 2). The power law was fitted to the curves individually (Table II). This
leads to the averaged parameterization:

dN_µ/dE_µ = A (E_0/GeV)^κ (E_µ/GeV)^{−(2+κ)}
A = (3.5 ± 0.5 (stat) +6.5/−2.0 (sys)) · 10^−3 GeV^−1
κ = 0.97 ± 0.07 (stat) ± 0.12 (sys).    (11)

The systematic uncertainties were estimated by using different values of x for the energy correction.
Fig. 2. The muon flux dN_µ/dE_µ as a function of the muon energy E_µ, for primary energies of 1 TeV, 10 TeV, 100 TeV and 1 PeV, together with the individual power-law fits. We multiplied by E_µ² E_0^{−κ} to remove the primary-energy dependence to some extent and improve readability. The results of the fits are given in Table II.
Using the integral representation (Fig. 3), we can see that on average a hadronic cascade of 100 TeV produces, e.g., about one muon with an energy above 10 GeV. Such a muon would have a track length of about 36 m.
CORSIKA provides the information whether a muon was generated from a pion. Using this information, Fig. 4 shows the fraction of pion-produced muons among all muons as a function of the energy.
Fig. 3. Integral muon flux N_µ(E_µ > E) for the four primary energies. The curves are taken from the fits to the differential muon flux (Fig. 2 and Table II).

TABLE II
Parameterization of the muon flux: the parameterization given in eq. (11) was fitted to the muon flux individually for the different primary energies (see Fig. 2).

E_0       10^3 · A [GeV^-1]   κ
1 TeV     9.457 ± 6.792       1.242 ± 0.371
10 TeV    4.731 ± 0.995       1.007 ± 0.104
100 TeV   3.020 ± 0.703       0.840 ± 0.113
1 PeV     3.089 ± 0.965       1.091 ± 0.156
average   3.490 ± 0.492       0.971 ± 0.068

As expected, other
production mechanisms (e.g. kaons) become more im-
portant with higher energy. However, the constant hadron fractions B_h are a reasonable approximation. Our simple model with only pions and kaons (B_K = 0.1) predicts a constant N^π_µ/N_µ = 77%.
Fig. 4. Muon parent: the ratio N^π_µ/N_µ of the number of muons produced by pions to the total number of muons, shown as a function of the muon energy E_µ for all four primary energies and compared to the model. A slight increase with energy of the fraction of muons produced by other parent hadrons can be observed; however, the comparison with the model shows that a constant fraction is a reasonable approximation.
V. DISCUSSION AND CONCLUSION
We developed an analytical model describing the muon production in hadronic cascades, showing that it follows a power law with exponent −(2 + κ). The amplitude scales linearly with the medium density ρ and with the primary energy as E_0^κ; changing from salt water to ice is an effect of approximately 10% in amplitude.
We compared this model to a full shower simulation with a modified CORSIKA version. The setup used forced us to correct for energy losses. To minimize the influence of this correction, and because of the short track length of low-energy muons, we focus on energies above 3 GeV.
We fitted a power law to the results (Fig. 2 and Table II) and obtain the parameters, with systematic errors due to the energy correction: A = (3.5 ± 0.5 (stat) +6.5/−2.0 (sys)) · 10^−3 GeV^−1 and κ = 0.97 ± 0.07 (stat) ± 0.12 (sys). Around a muon energy of E_µ = 10 GeV and a primary energy of E_0 = 10 TeV, the values most important to us, the analytical model predicts a flux a factor of three higher than the simulation results.
This study could be checked with a newer modified CORSIKA version [11], which would provide direct results for ice, allow a comparison of different hadronic interaction models, and increase the statistics. The parameterization of muon production in hadronic cascades that we provide can be used to simulate neutrino-induced cascades more accurately.
VI. ACKNOWLEDGEMENTS
We would like to thank Terry Sloan for providing
us with his modified version of CORSIKA. Sebastian
Panknin and Marek Kowalski acknowledge the support
of the Deutsche Forschungsgemeinschaft (DFG).
REFERENCES
[1] The IceCube Collaboration: J. Ahrens et al., IceCube Preliminary Design Document, http://www.icecube.wisc.edu/science/publications/pdd/pdd.pdf, 2001.
[2] G. V. Domogatskii (for the Baikal Collaboration), Status of the BAIKAL neutrino project, arXiv:astro-ph/0211571v2, 2002.
[3] Y. Becherini (for the ANTARES Collaboration), Status report of the ANTARES experiment, J. Phys. Conf. Ser. 39:444-446, 2006.
[4] AMANDA Collaboration: M. Ackermann et al., Search for Neutrino-Induced Cascades with AMANDA, Astropart. Phys. 22:127-138, 2004.
[5] D. Heck and T. Pierog, Extensive Air Shower Simulation with CORSIKA: A User's Guide, http://www-ik.fzk.de/corsika/usersguide/usersguide.pdf, version 6.9xx from April 3, 2009.
[6] W. Heitler, The Quantum Theory of Radiation, third edition, Oxford University Press, London, 1954.
[7] J. Matthews, A Heitler model of extensive air showers, Astroparticle Physics 22 (2005) 387-397.
[8] T. K. Gaisser, Cosmic Rays and Particle Physics, Cambridge, 1990.
[9] S. Bevan et al., Astroparticle Physics 28, 366, 2007.
[10] Particle Data Group, Particle Physics Booklet, Berkeley, 2006.
[11] J. Bolmont, CORSIKA-IW: a Modified Version of CORSIKA for Cascade Simulations in Ice or Water, http://nuastro-zeuthen.desy.de/e27/e711/e745/infoboxContent746/corsika-iw\julien\0812.pdf, 2008.
Muon bundle energy loss in deep underground detector
Xinhua Bai∗, Dmitry Chirkin†, Thomas Gaisser∗, Todor Stanev∗ and David Seckel∗
∗Bartol Research Inst., Dept. of Physics and Astronomy, University of Delaware, Newark, DE 19716, U.S.A.
†Dept. of Physics, University of Wisconsin, Madison, WI 53706, USA
Abstract. High energy air showers contain bundles of muons that can penetrate deep underground. The study of these high energy muons can reveal the cosmic ray primary composition and some features of the hadronic interactions. In an underground neutrino experiment like IceCube, high energy muons are also of interest because they are the dominant part of the background to neutrinos. We study muon bundle energy loss in deep ice by full Monte Carlo simulation to define its fluctuations and its relation to the cosmic ray primary nuclei. An analytical formula for the mean energy loss of muon bundles is compared with the Monte Carlo result. We also use the simulation to set the background for muons with catastrophic energy losses much higher than those of normal muon bundles.
Keywords: cosmic ray, muon bundle, energy loss.
I. INTRODUCTION
When high energy cosmic ray particles interact in the
atmosphere and develop extensive air showers (EAS),
muons are produced through the decay of mesons,
such as pions and kaons. IceCube is primarily a high
energy neutrino telescope but it can also study high
energy muon bundles which, together with the EAS
size determined by the surface array IceTop [1], will
contribute to studies of the cosmic ray composition and
hadronic interactions.
A proton shower contains one muon above 500 GeV when the primary energy is about 100 TeV. Muons with that energy can reach the underground detector. At higher energies more high energy muons are produced in bundles. The muons above several hundred GeV in a bundle are highly collimated and closer to each other in space [2], [3] than the distance between IceCube strings. This makes it very hard to count the number of individual muons in the bundle.
In this work we study the energy loss of muon bundles produced by proton and iron showers at different primary energies by carrying out a Monte Carlo simulation. An analytical formula for the mean energy loss of muon bundles is compared with the Monte Carlo result in Section II. We discuss the fluctuations in the energy loss and their association with the cosmic ray primary particles in Section III. With the energy loss limits set by the simulation, we also discuss in Section IV the search for muons above 100 TeV.
We first calculated proton and iron showers with CORSIKA (version 6.735, with SIBYLL 2.1 as the high energy hadronic interaction model). This version of CORSIKA does not include charm production; all the muons produced are from pion or kaon decay. Two hundred showers were generated for eight energy points from 1 PeV to 1 EeV for proton and iron primaries. The production was set for the South Pole location (2835 meters above sea level), and the atmospheric parametrization for July 01, 1997 was chosen. All muons above 200 GeV in each shower were extracted from the shower ground particle file. Each of these muons was then propagated through the ice to a depth of 2450 meters using the Muon Monte Carlo (MMC) simulation package [4]. All processes in which a muon loses more than 10^−3 of its energy are treated stochastically. Energy losses due to ionization, bremsstrahlung, photo-nuclear interactions and pair production, and their continuous components, were kept for each five-meter step along the muon track.
II. HIGH ENERGY MUONS IN AIR SHOWERS: ELBERT
FORMULA AND CORSIKA SHOWERS
The number of high energy muons in an EAS depends
on the energy and mass of the primary cosmic ray
particle. A well-known parametrization was given by
Elbert [5] as follows:
N_{\mu,B}(E_\mu > E_\mu(0)) = A\,\frac{0.0145\,\mathrm{TeV}}{E_\mu(0)\cos\theta}\left(\frac{E_0}{A\,E_\mu(0)}\right)^{p_1}\left(1 - \frac{A\,E_\mu(0)}{E_0}\right)^{p_2}
\simeq \frac{0.0145\,\mathrm{TeV}}{\cos\theta}\,E_0^{p_1}\,A^{1-p_1}\,E_\mu(0)^{-p_1-1}\left(1 - p_2\,\frac{A\,E_\mu(0)}{E_0}\right)   (1)

in which A, E_0 and θ are the mass number, total energy and zenith angle of the primary nucleus, p_1 = 0.757 and p_2 = 5.25. The approximation keeps only the first two terms of the Taylor expansion of (1 - A E_μ(0)/E_0)^{p_2}. It will be used later in this paper to derive the mean energy loss of muon bundles in matter.
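As a worked illustration, here is a minimal Python sketch of Eq. (1), using the parameter values quoted above (p_1 = 0.757, p_2 = 5.25 and the 0.0145 TeV normalization); energies are in TeV, and the handling of the kinematic limit is a simplification of ours rather than part of the original parametrization.

```python
import numpy as np

P1, P2, K = 0.757, 5.25, 0.0145   # K in TeV

def n_mu_elbert(E_mu0, E0, A, cos_theta):
    """Elbert parametrization: number of muons with surface energy > E_mu0 (TeV)
    for a primary of mass number A and total energy E0 (TeV) at zenith angle theta."""
    x = A * E_mu0 / E0
    if x >= 1.0:                   # beyond the kinematic limit
        return 0.0
    return A * K / (E_mu0 * cos_theta) * (1.0 / x) ** P1 * (1.0 - x) ** P2

def n_mu_approx(E_mu0, E0, A, cos_theta):
    """First-order approximation, last line of Eq. (1)."""
    x = A * E_mu0 / E0
    return (K / cos_theta) * E0 ** P1 * A ** (1.0 - P1) * E_mu0 ** (-P1 - 1.0) * (1.0 - P2 * x)

# 50 PeV proton at 30 degrees zenith, muons above 1 TeV:
print(n_mu_elbert(1.0, 5.0e4, 1, np.cos(np.radians(30))),
      n_mu_approx(1.0, 5.0e4, 1, np.cos(np.radians(30))))
```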
Both the Elbert parametrization and its approximation were compared with air shower Monte Carlo results for protons and iron from several hundred TeV up to 1
EeV. Two examples are shown in Fig. 1. Except in the
threshold region (where the Elbert parametrization may
not be valid) the agreement over the whole energy region
is remarkable. This is shown by the two peaks at zero
in the two plots at the bottom in Fig. 1. One can also
see that CORSIKA with SIBYLL 2.1 has a slightly higher yield at higher muon energies. The excess increases with
the energy of the primary particle.
MUON BUNDLE ENERGY LOSS ...
Fig. 1. Number of muons in the bundle as a function of the muon energy. Top: 50 PeV proton at 30 degrees zenith (lower black +) and 1 EeV iron at zero degrees zenith (higher red markers). The open squares and circles are the averages over all 200 showers at each energy point. The curves represent the Elbert formula; the dashed lines are the approximation to the Elbert formula, corresponding to the last line of Eq. (1). Bottom: the difference between the number of muons (for E_μ ≥ 100 GeV and an increment step size of 100 GeV) in CORSIKA showers and that predicted by the Elbert formula (left: 50 PeV proton, right: 1 EeV iron).
III. MUON BUNDLE ENERGY LOSS AND
COMPOSITION SENSITIVITY
A. Muon bundle energy loss in ice
Since IceCube measures the energy loss instead of the
number of muons in the bundle, we derive an analytical
expression for the mean energy loss of a muon bundle. Starting from the approximation in Eq. (1), with the muon energy loss per unit slant depth written as

\frac{dE_\mu(X)}{dX} \approx a + b\,E_\mu(X),   (2)

the energy loss of a muon bundle at slant depth X is an integral over the energy losses of the muons in the bundle:

\frac{dE_{\mu,B}(X)}{dX} = \int_{E_\mu^{min}(X)}^{E_\mu^{max}(X)} \frac{dE_\mu(X)}{dX}\,\frac{dN_{\mu,B}(X)}{dE_\mu(X)}\,dE_\mu(X)
 = \int_{E_\mu^{min}(X)}^{E_\mu^{max}(X)} \left[a + b\,E_\mu(X)\right]\frac{dN_{\mu,B}(X)}{dE_\mu(X)}\,dE_\mu(X).   (3)
Here, E_μ^min(X) and E_μ^max(X) represent the possible minimum and maximum energies of muons in the bundle at slant depth X. They can be written as follows [6]:

E_\mu^{min}(X) = \max\left\{\left[\left(E_\mu(0) + \tfrac{a}{b}\right)e^{-bX} - \tfrac{a}{b}\right]_{min},\ 0\right\} = 0
E_\mu^{max}(X) = \left(E_\mu^{max}(0) + \tfrac{a}{b}\right)e^{-bX} - \tfrac{a}{b} = \left(\tfrac{E_0}{A} + \tfrac{a}{b}\right)e^{-bX} - \tfrac{a}{b}.   (4)
The approximate mean energy loss of a muon bundle can be obtained by carrying out the integration [6]. The expression can be further simplified by assuming that the high energy corrections can be ignored:

\frac{dE_{\mu,B}(X)}{dX} = \omega\,b\,(p_1+1)\,\frac{1}{V+1}\left[\frac{1}{p_1+1}\left(\frac{a}{b}\right)^{-p_1} V^{-p_1-1} + \frac{1}{p_1}\left(\frac{a}{b}\right)^{-p_1} V^{-p_1} - \frac{1}{p_1+1}\left(\frac{a}{b}\right)\left(\frac{E_0}{A}\right)^{-p_1-1} - \frac{1}{p_1}\left(\frac{E_0}{A}\right)^{-p_1}\right]   (5)

in which \omega = \frac{0.0145\,\mathrm{TeV}}{\cos\theta}\,E_0^{p_1}\,A^{1-p_1} and V = e^{bX} - 1. For the ice at the South Pole, a = 0.26 GeV mwe^-1 and b = 3.60 × 10^-4 mwe^-1 [4], in units of meters water equivalent (mwe). X is the slant depth in mwe along the muon bundle track.
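A minimal numerical sketch of Eq. (5) follows, assuming the ice parameters quoted above (a = 0.26 GeV/mwe, b = 3.60 × 10^-4 /mwe) and working in GeV and mwe throughout; the example primary energy and depth are illustrative only.

```python
import numpy as np

A_ICE, B_ICE = 0.26, 3.60e-4   # a [GeV/mwe], b [1/mwe] for South Pole ice [4]
P1 = 0.757

def mean_bundle_loss(X_mwe, E0_GeV, A, cos_theta):
    """Approximate mean muon-bundle energy loss dE/dX (GeV per mwe) at slant
    depth X (mwe), Eq. (5), for a primary of total energy E0 (GeV), mass number A
    and zenith angle theta. High-energy (p2) corrections are neglected."""
    omega = (0.0145e3 / cos_theta) * E0_GeV ** P1 * A ** (1.0 - P1)  # 0.0145 TeV = 14.5 GeV
    V = np.exp(B_ICE * X_mwe) - 1.0
    ab = A_ICE / B_ICE
    bracket = (ab ** (-P1) * V ** (-P1 - 1.0) / (P1 + 1.0)
               + ab ** (-P1) * V ** (-P1) / P1
               - ab * (E0_GeV / A) ** (-P1 - 1.0) / (P1 + 1.0)
               - (E0_GeV / A) ** (-P1) / P1)
    return omega * B_ICE * (P1 + 1.0) * bracket / (V + 1.0)

# Vertical 1 EeV iron shower at ~1800 mwe slant depth (illustrative):
print(mean_bundle_loss(1800.0, 1.0e9, 56, 1.0))
```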
The comparison with the full Monte Carlo can be seen in Fig. 2. Despite the large fluctuations in the energy loss, mainly due to bremsstrahlung, the mean energy loss in the full Monte Carlo (the blue and green dots) is well described by the analytic approximation of Eq. (5). It needs to be pointed out, however, that for low energy showers, or for the energy loss at larger slant depths, Eq. (5) can show a larger offset from the mean Monte Carlo values. This is because the Elbert formula (Eq. (1)) is not exact at high E_μ values, or because a and b can differ from the constant values used here.
Fig. 2. Muon bundle energy loss as a function of slant depth. Two examples are given in the figure: red markers for vertical 1 EeV iron showers and black + for 30 degree 50 PeV proton showers. Open squares and circles are the mean values of the Monte Carlo results for iron and proton showers. Eq. (5) is represented by the two curves.
B. Muon bundle energy loss and composition
To study the muon bundle energy loss and composition sensitivity, in the two-component case of this work
we first define the composition resolving parameters (A|Y) and (B|Y), based on an observable Y, as follows:
(A|Y) = \frac{\sum_{i} N_i^A/(N_i^A + N_i^B)}{\sum_{i} P_i^A}   (6)

(B|Y) = \frac{\sum_{i} N_i^B/(N_i^A + N_i^B)}{\sum_{i} P_i^B}   (7)
in which P_i^A = 1.0 (or 0.0) when N_i^A ≠ 0 (or = 0), and N_i^A and N_i^B represent the numbers of proton and iron events in the i-th bin of the dN/dY distribution, as shown in Fig. 3. (A|Y) and (B|Y) have values between 0 and 1. When the two distributions are well separated from each other, both (A|Y) and (B|Y) are equal to 1. When the two distributions are identical and fully overlapping, (A|Y) = (B|Y) = 0.5, which means the chance of assigning a particle as proton or iron is 50%.
[Figure 3 legend: P: (1.0, 15.0, 4.0), (P|Y) = 0.697; Fe: (1.0, 25.0, 4.0), (Fe|Y) = 0.579.]
Fig. 3. Left: definition of the composition resolving parameters used in this work, (A|Y) and (B|Y). The two histograms represent the distribution of a variable Y (e.g. muon bundle energy loss in the deep ice detector) for two types of primary particles (say proton and iron). Right: particles A and B are represented by two Gaussian-shaped distributions with the amplitude, mean and sigma given in parentheses. Particles A and B have 320000 and 80000 samples, respectively, in this test.
The value of the parameter also depends on the frequency of the detectable signals of the different particles. This can be seen in Fig. 3 (right-hand panel), in which (A|Y) = 0.697 and (B|Y) = 0.579. This definition can easily be extended to cases in which each event has multiple observable variables.
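The following sketch evaluates Eqs. (6) and (7) for two event samples. The Gaussian toy input mimics the right-hand panel of Fig. 3, but the binning is our own choice, so the printed values will only land in the neighbourhood of the quoted 0.697 and 0.579.

```python
import numpy as np

def resolving_parameters(y_A, y_B, bins):
    """Composition resolving parameters (A|Y) and (B|Y) of Eqs. (6)-(7):
    the purity N_A/(N_A+N_B) (resp. N_B/(N_A+N_B)) summed over the bins of
    dN/dY, divided by the number of bins in which species A (resp. B) is present."""
    n_A, edges = np.histogram(y_A, bins=bins)
    n_B, _ = np.histogram(y_B, bins=edges)
    total = n_A + n_B
    occupied = total > 0
    purity_A = np.where(occupied, n_A / np.where(occupied, total, 1), 0.0)
    purity_B = np.where(occupied, n_B / np.where(occupied, total, 1), 0.0)
    p_A = n_A > 0          # P_i^A = 1 where species A populates the bin
    p_B = n_B > 0
    return purity_A.sum() / p_A.sum(), purity_B.sum() / p_B.sum()

# Toy Gaussian test in the spirit of Fig. 3 (right): binning is arbitrary here.
rng = np.random.default_rng(1)
a_val, b_val = resolving_parameters(rng.normal(15, 4, 320000),
                                    rng.normal(25, 4, 80000),
                                    bins=np.linspace(0, 40, 81))
print(round(a_val, 3), round(b_val, 3))
```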
IceCube is sensitive to the Cherenkov light emitted by high energy charged particles. Simulation shows that the Cherenkov light yield of an in-ice event is nearly proportional to the particle's total energy loss [7]. Since we are not doing a full detector simulation, we use the total energy loss of muon bundles in small bins along the bundle track as a measure of the signal size. The distributions of muon bundle energy loss in five-meter steps at slant depths between 1958 m and 2010 m in the ice are shown in Fig. 4. The proton energy loss distribution has a significant overlap with that of iron, much bigger than the overlap between the muon number distributions.
To improve the separation between the energy loss signals from different nuclei we excluded the large energy loss events caused mostly by bremsstrahlung. The effect of eliminating all energy loss entries larger than 85% of the average (cut_1, red) and larger than 500% of the average (cut_2, blue) is shown in the bottom panel of Fig. 4.
Fig. 4. Top: histograms of the number of muons and of the muon bundle total energy loss in five-meter steps at depths between 1958 m and 2010 m in ice. The example is for 600 PeV proton and iron showers at a zenith angle of 30 degrees. Bottom: histograms of the muon bundle total energy loss for the same Monte Carlo data sample after the two cuts were applied. See the text for more details.
Fig. 5 summarizes how the composition resolving parameter varies with slant depth after these two cuts for proton and iron showers. Several features can be seen in this figure: (1) the tighter cut (cut_1) gives a higher value of the resolving parameter, corresponding to less overlap between the two histograms shown in the bottom plot of Fig. 4; (2) when the tighter cut is applied, the composition resolving power using the muon bundle energy loss can come close to that obtained from the number of muons in the bundle in the simulation; (3) for the same cut, the composition resolving power is slightly better at shallower depths. It would be very interesting to explore these features in a real data analysis.
IV. SIGNATURE OF HIGH ENERGY PROMPT MUONS IN
MUON BUNDLE EVENTS
Very high energy muons in air showers are produced either in the decay of very short-lived particles, i.e. charm, or in the first interaction, whether the parent is conventional (pion or kaon) or charmed. The crossover from conventional to prompt muon fluxes was estimated to happen between 40 TeV and 3 PeV [8]. Such muons may be used to study the composition of cosmic ray primaries, as well as heavy quark production in high energy p-N interactions. There are several ways to separate prompt muons from conventional ones in
Fig. 5. The values of the composition resolving parameters for proton and iron under different cuts at different slant depths. The calculation was done with 200 proton showers and 200 iron showers of 600 PeV primary energy at a zenith angle of 30 degrees. The two cuts correspond to those used in Fig. 4.
underground experiments, such as using the difference in their zenith angle distributions, or their different depth dependence at a given zenith angle [9]. Another technique, explored here, relies on recognizing the catastrophic dE/dX signature of these leading muons as bursts of light on top of the otherwise smoother light deposition from a bundle of lower energy muons.
The probability of finding a certain amount of energy loss in a five-meter step from 1450 to 2450 meters under the ice is shown in Fig. 6. The chance of an energy loss of about 30 TeV (point A in Fig. 6) in a five-meter step is much higher for a muon with an energy of 100 TeV than for conventional muon bundles from showers below 100 PeV. If one sees a burst energy loss above 160 TeV, it is almost certain (P > 1 × 10^-3 versus P < 3 × 10^-5) that the event contains a single muon with energy above 1 PeV rather than coming from a shower below 1 EeV (point B in Fig. 6).
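A simple way to scan a simulated (or reconstructed) bundle profile for such bursts is sketched below; the 160 TeV working point follows the discussion above, while the baseline factor is an arbitrary illustrative choice.

```python
import numpy as np

def find_bursts(loss_per_step_GeV, burst_threshold_GeV=1.6e5, baseline_factor=10.0):
    """Return indices of five-meter steps whose energy loss exceeds both an
    absolute burst threshold (default 160 TeV, point B of Fig. 6) and a
    multiple of the median bundle loss. Thresholds are illustrative only."""
    loss = np.asarray(loss_per_step_GeV, dtype=float)
    baseline = np.median(loss)
    mask = (loss > burst_threshold_GeV) & (loss > baseline_factor * baseline)
    return np.flatnonzero(mask)

# Toy profile: a smooth ~1 TeV/step bundle with one 200 TeV catastrophic loss.
profile = np.full(200, 1.0e3)
profile[120] = 2.0e5
print(find_bursts(profile))   # -> [120]
```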
Since the cosmic ray primary energy can be deter-
mined by the surface array in IceCube, this method can
be explored further with IceTop and in-ice coincidence
data.
V. SUMMARY
In this work, we studied the muon bundle energy
loss in ice and its association with cosmic ray primary
composition. The analytic formula of the mean muon
bundle energy loss given here is in reasonably good agreement with the full Monte Carlo. It can be used in muon bundle event reconstruction in IceCube. The parameters (cosmic ray primary energy and mass) in the formula can be further explored in composition studies using IceTop and in-ice coincidence data [10]. Using IceTop in-ice coincidence data, one can also look for signatures of very high energy muons from charm decay by recognizing large catastrophic dE/dX along the muon bundle track.

Fig. 6. The probability of the energy loss (in a five-meter step) of muon bundles in air showers (solid lines) and of muons with fixed energy (dashed lines). Vertical proton and iron showers with primary energies of 500 TeV, 10 PeV, 100 PeV, 600 PeV and 1 EeV are plotted together with vertical muons with fixed surface energies of 5 TeV, 100 TeV, 1 PeV, 6 PeV and 10 PeV. The probability at larger energy loss increases as the energy of the primary or of the single muon goes up. The energy loss sample for the probability calculation is taken from a depth of 1450 m to the bottom of the in-ice array.
Acknowledgment
This work is supported in part by the NSF Office of Polar Programs and by NSF grant ANT-0602679.
REFERENCES
[1] T. K. Gaisser for the IceCube Collaboration, Proc. 30th ICRC, Merida (Mexico, 2007), arXiv:0711.0353, p. 15-18.
[2] T. Gaisser, Cosmic Rays and Particle Physics, p. 208 (Cambridge University Press, 1990).
[3] X. Bai et al., Proc. 30th ICRC, Merida, Mexico, Vol. 5, p. 1209-1212, 2007.
[4] D. Chirkin and W. Rhode, "Propagating leptons through matter with Muon Monte Carlo (MMC)", arXiv:hep-ph/0407075v2, 2008.
[5] J. W. Elbert, in Proc. DUMAND Summer Workshop (ed. A. Roberts), 1978, vol. 2, p. 101.
[6] X. Bai, "Energy loss of muon bundles: an analytic approximation", IceTop internal technical report, 2007.
[7] C. H. V. Wiebusch, Ph.D. thesis, Physikalisches Institut, RWTH Aachen, 1995.
[8] G. Gelmini, P. Gondolo, and G. Varieschi, arXiv:hep-ph/0209111v1.
[9] E. V. Bugaev et al., Phys. Rev. D 58, 054001 (1998).
[10] See for example the work by T. Feusels, J. Eisch and C. Xu, Reconstruction of IceCube coincident events and study of composition-sensitive observables using both the surface and deep detector, these proceedings. More work on data is still needed.
Constraints on Neutrino Interactions at energies beyond 100 PeV with Neutrino Telescopes
Shigeru Yoshida*
*Department of Physics, Faculty of Science, Chiba University, Chiba 263-8522, Japan
Abstract. A search for extremely high energy cosmic neutrinos has been carried out with the IceCube Neutrino Observatory. These events are neutrino-induced energetic charged leptons, and their rate depends on the neutrino-nucleon cross-sections. The resultant event rate has implications for possible new physics beyond the standard model, as it is predicted that the cross-sections can be much higher than the standard particle physics prediction if we live in more than four space-time dimensions. In this study we show the capability of neutrino telescopes such as IceCube to constrain neutrino cross-sections at energies beyond 10^7 GeV. The constraints are obtained as a function of the extraterrestrial neutrino flux in the relevant energy range, which accounts for the astrophysical uncertainty of neutrino production models.
Keywords: Neutrino, IceCube, cross-sections
I. INTRODUCTION
High energy cosmic neutrino observations provide a rare opportunity to explore the neutrino-nucleon (νN) interaction behavior beyond energies accessible to present accelerators. These neutrinos interact during their propagation in the Earth and produce energetic muons and taus. These secondary leptons reach underground neutrino detectors and leave detectable signals. The detection rate is, therefore, sensitive to the neutrino-nucleon interaction probability. The center-of-mass energy of the collision, √s, is well above ∼10 TeV for cosmic neutrino energies on the order of 1 EeV (= 10^9 GeV), a representative energy range for the bulk of the GZK cosmogenic neutrinos generated by the interactions between the highest energy cosmic ray nucleons and the cosmic microwave background photons [1].
The νN collision cross-sections could differ substantially if non-standard particle physics beyond the Standard Model (SM) is at work in the high energy regime of √s ≫ TeV. Extra-dimension scenarios, for example, have predicted such effects [2]. The cross-sections for black hole production via νN collisions can be larger than the SM prediction by more than two orders of magnitude [3]. The effect would be sizable enough to affect the expected annual event rate (O(0.1-1)) of GZK neutrinos in the ∼1 km^3 instrumented volume of an underground neutrino telescope such as the IceCube observatory; the search for extremely-high energy (EHE) cosmic neutrinos therefore leads to constraints on non-standard particle physics.
The IceCube neutrino observatory has already begun EHE neutrino hunting with the partially deployed underground optical sensor array. The 2007 partial IceCube detector realized a ∼0.5 km^2 effective area for muons at 10^9 GeV and recently placed a limit on the flux of EHE neutrinos at a level approximately an order of magnitude higher than the expected GZK cosmogenic neutrino intensities, for 242 days of observation [4]. Since new particle physics may vary the cross-section by more than an order of magnitude, as noted above, this result should already imply a meaningful bound on the νN cross-sections. In this paper, we study the constraint on the νN cross-sections (σνN) from the null detection of EHE neutrinos in the 2007 IceCube observation. A model-independent bound is derived by estimating the lepton intensity at the IceCube depth with the SM cross-sections scaled by a constant. The constraint is displayed in the form of an excluded region in the plane of the cosmic neutrino flux and σνN. It is equivalent to an upper bound on σνN for a given flux of astrophysical EHE neutrinos.
II. THE METHOD
The neutrino and charged lepton fluxes at the IceCube depth originating from a given neutrino flux at the Earth's surface are calculated with the coupled transport equations:

\frac{dJ_\nu}{dX} = -N_A\,\sigma_{\nu N,\mathrm{CC+NC}}\,J_\nu + \frac{m_l}{c\rho\tau_l^d}\int dE_l\,\frac{1}{E_l}\,\frac{dn_l^d}{dE_\nu}\,J_l(E_l) + N_A\int dE'_\nu\,\frac{d\sigma_{\nu N,\mathrm{NC}}}{dE_\nu}\,J_\nu(E'_\nu) + N_A\int dE'_l\,\frac{d\sigma_{lN,\mathrm{CC}}}{dE_\nu}\,J_l(E'_l)   (1)

\frac{dJ_l}{dX} = -N_A\,\sigma_{lN}\,J_l - \frac{m_l}{c\rho\tau_l^d E_l}\,J_l + N_A\int dE'_\nu\,\frac{d\sigma_{\nu N,\mathrm{CC}}}{dE_l}\,J_\nu(E'_\nu) + N_A\int dE'_l\,\frac{d\sigma_{lN}}{dE_l}\,J_l(E'_l) + \frac{m_l}{c\rho\tau_l^d}\int dE'_l\,\frac{1}{E'_l}\,\frac{dn_l^d}{dE_l}\,J_l(E'_l),   (2)
where J_l = dN_l/dE_l and J_ν = dN_ν/dE_ν are the differential fluxes of charged leptons and neutrinos, respectively. X is the column depth, N_A is Avogadro's number, ρ is the local density of the medium (rock/ice) along the propagation path, σ is the relevant interaction cross-section, dn_l^d/dE is the energy distribution of the decay products, derived from the decay rate per unit energy, c is the speed of light, and m_l and τ_l^d are the mass
and the decay lifetime of the lepton l, respectively.

Fig. 1. Integral fluxes of the muons (solid lines) and taus (dashed lines) above 10 PeV (= 10^7 GeV) at the IceCube depth (∼1450 m) from the GZK cosmogenic neutrinos [5]. The numbers along the curves are the multiplication factors (N_scale) that enhance the standard νN cross-sections [6] in the relevant calculations.

In
this paper we scale σνN to the SM prediction with a factor N_scale, i.e., σνN ≡ N_scale σ^SM_νN. It is an extremely intensive computational task to solve the coupled equations above for every possible value of σνN. To avoid this difficulty, we introduce two assumptions to decouple the calculation of J_ν from the charged lepton transport equation. The first is that the distortion of the neutrino spectrum by the neutral current reaction is small, and the other is that the contribution of muon and tau decay to the neutrino flux, represented by the second term on the right hand side of Eq. 1, is negligible. These are very good approximations in the energy region above 10^8 GeV, where even the tau is unlikely to decay before reaching the IceCube instrumentation volume. The neutrino flux is then simply given by the beam dumping factor as

J_\nu(E_\nu, X_{IC}) = J_\nu(E_\nu, 0)\,e^{-N_{scale}\,\sigma^{SM,CC}_{\nu N}\,N_A\,X_{IC}},   (3)
where X_IC is the column density of the propagation path from the Earth's surface to the IceCube depth. The charged lepton fluxes, J_{l=μ,τ}(E_l, X_IC), are obtained as

J_{\mu,\tau}(E_{\mu,\tau}, X_{IC}) = N_A\int_0^{X_{IC}} dX \int dE'_{\mu,\tau}\,\frac{dN_{\mu,\tau}}{dE_{\mu,\tau}}(E'_{\mu,\tau} \to E_{\mu,\tau}) \int dE_\nu\,N_{scale}\,\frac{d\sigma^{SM,CC}_{\nu N}}{dE'_{\mu,\tau}}\,J_\nu(E_\nu, 0)\,e^{-N_{scale}\,\sigma^{SM,CC}_{\nu N}\,N_A\,X}.   (4)
Here dN_{μ,τ}/dE_{μ,τ}(E'_{μ,τ} → E_{μ,τ}) represents the distribution of muons and taus with energy E_{μ,τ} at X_IC originating from those with energy E'_{μ,τ} produced by νN collisions at depth X. This is calculated with the transport equation, Eq. 2, with J_ν(E'_ν) replaced by Eq. 3.
Calculation of the neutrino and charged lepton fluxes with this method is feasible for a wide range of N_scale without any intensive computation. A comparison of the calculated fluxes with those obtained without the introduced simplification, for a limited range of N_scale, indicates that the relative difference in the resultant J_{ν,μ,τ}(X_IC) is within 40%. Since this analysis searches for at least an order of magnitude difference in σνN, the introduced simplifications provide sufficient accuracy for the present study.
Fig. 1 shows the calculated intensities of the secondary muons and taus for various N_scale factors. One can see that the intensity is almost proportional to N_scale, as expected, since the interaction probability to generate muons and taus depends linearly on σνN. It should be pointed out, however, that the dependence starts to deviate from complete linearity when the propagation distance is comparable to the mean free path of the neutrinos, as one can see for the case of N_scale = 10 in the figure. This is because the neutrino beam dumping factor in Eq. 3 becomes significant under these circumstances.
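The onset of this saturation can be illustrated with a few lines of Python evaluating N_scale times the beam-dumping factor of Eq. 3; the cross-section and column depth used below are order-of-magnitude placeholders, not the values used in this analysis.

```python
import numpy as np

N_A = 6.022e23   # Avogadro's number [1/g]

def lepton_yield_scaling(n_scale, sigma_sm_cm2, column_depth_gcm2):
    """Relative lepton production rate for a fixed target column: proportional
    to N_scale (interaction probability) times the beam-dumping factor
    exp(-N_scale * sigma * N_A * X) of Eq. (3). Inputs are placeholders."""
    tau = sigma_sm_cm2 * N_A * column_depth_gcm2   # optical depth at N_scale = 1
    return n_scale * np.exp(-n_scale * tau)

# Illustration: sigma of order 5e-33 cm^2 (roughly the SM CC value near 1 EeV)
# and an assumed near-horizontal column of ~1e7 g/cm^2.
for n in (1, 3, 10):
    print(n, lepton_yield_scaling(n, 5.0e-33, 1.0e7))
```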
The flux yield of leptons, Y^l_ν (l = ν, μ, τ), from neutrinos with a monochromatic energy E^s_ν at the Earth's surface is given by Eq. 4 with the insertion of J_ν(E_ν, 0) = δ(E_ν - E^s_ν). The resultant event rate per neutrino energy decade is then obtained as

N_\nu(E^s_\nu) = \sum_{\nu=\nu_e,\nu_\mu,\nu_\tau}\frac{1}{3}\,\frac{dJ_{\nu_e+\nu_\mu+\nu_\tau}}{d\log E_\nu}(E^s_\nu)\int d\Omega \sum_{l=\nu,\mu,\tau}\int dE_l\,A_l(E_l)\,Y^l_\nu(E^s_\nu, E_l, X_{IC}(\Omega), N_{scale})   (5)
where A_l is the effective area of IceCube for detecting the lepton l. Note that the differential limit on the neutrino flux is given by Eq. 5 for N_scale = 1 with N_ν = μ̄_90, which corresponds to the 90% confidence level average upper limit. This calculation is valid when the cosmic neutrino flux J_ν and the cross-section σνN do not change rapidly over a decade of neutrino energy around E^s_ν. Limiting σνN in the present analysis corresponds to extracting the relation between N_scale and the (unknown) cosmic neutrino flux J_{ν_e+ν_μ+ν_τ} that yields N_ν = μ̄_90. The obtained constraint on σνN is represented as a function of J_{ν_e+ν_μ+ν_τ} for a given energy E^s_ν. It consequently accounts for the astrophysical uncertainties on the cosmic neutrino flux.
In scenarios with extra dimensions and strong gravity, Kaluza-Klein gravitons can change only the neutral current (NC) cross-sections because gravitons are electrically neutral. Any scenario belonging to this category can be investigated by scaling only σ^NC_νN in the present analysis. The event rate calculation by Eq. 5 is then performed for Y^l_ν(N_scale = 1) with the effective area for neutrinos, A_ν, enhanced by (σ^{SM,CC}_{νN} + N_scale σ^{SM,NC}_{νN})/(σ^{SM,CC}_{νN} + σ^{SM,NC}_{νN}), since the rate of events detectable by IceCube via the NC reaction is proportional to σ^NC_νN. We also show the constraint in this case.
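The rescaling factor applied to the neutrino effective area in this NC-only case can be written as a one-line helper; the cross-section values below are illustrative placeholders with an assumed NC/CC ratio of about 0.4.

```python
def nc_area_enhancement(n_scale, sigma_cc, sigma_nc):
    """Factor by which the neutrino effective area is rescaled when only the
    neutral-current cross-section is multiplied by N_scale:
    (sigma_CC + N_scale*sigma_NC) / (sigma_CC + sigma_NC)."""
    return (sigma_cc + n_scale * sigma_nc) / (sigma_cc + sigma_nc)

# Placeholder SM-like values near 1 EeV (assumed, for illustration only):
sigma_cc, sigma_nc = 5.0e-33, 2.0e-33   # cm^2
for n in (1, 10, 100):
    print(n, round(nc_area_enhancement(n, sigma_cc, sigma_nc), 2))
```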
III. RESULTS
In this analysis we use the IceCube observation results with 242 days of data taken in 2007 to limit σνN using Eq. 5. No detection of signal candidates in the measurement has led to an upper limit on the neutrino flux of 6 × 10^-7
GeV cm^-2 sec^-1 sr^-1 [4]. The effective area A_l is 0.5 km^2 for μ, 0.3 km^2 for τ, and 3 × 10^-4 km^2 for neutrinos.

Fig. 2. Constraints on the all-flavor cosmic neutrino flux j(E)E [GeV cm^-2 sec^-1 sr^-1] and the charged current νN cross-section σνN [cm^2], for E^s_ν = 1 EeV and 10 EeV, based on the null detection of neutrino signals in the IceCube 2007 observation. The upper right region is excluded by the present analysis. The crosses provide reference points where the standard cross-section [6] and the expected GZK cosmogenic neutrino fluxes [7] are located.
Constraints on σνN are then derived with Eq. 5. Here we assume that the effective area for neutrinos is proportional to N_scale. The results for E^s_ν = 10^9 and 10^10 GeV are
shown in Fig. 2. Enhancing the charged current cross-sections by more than a factor of 30 for E_ν = 1 EeV (10^9 GeV) is disfavored if the astrophysical neutrino intensities are around ∼10^-7 GeV cm^-2 sec^-1 sr^-1, the expected range of the GZK cosmogenic neutrino bulk. Note that a neutrino-nucleon collision with E_ν = 1 EeV corresponds to √s ∼ 40 TeV, and the present limit on σνN would place a rather strong constraint on scenarios with extra dimensions and strong gravity, although a more accurate estimation requires studies with a model-dependent approach which implements the cross-sections and the final-state particles from the collision predicted by a given particle physics model in the neutrino propagation calculation. Taking into account the uncertainty on the astrophysical neutrino fluxes, any model that increases the neutrino-nucleon cross-section to produce charged leptons by more than two orders of magnitude at √s ∼ 40 TeV is disfavored by the IceCube observation. However, we should point out that the IceCube 2007 data could not constrain the charged current cross-sections if the intensity of cosmic neutrinos in the relevant energy region is lower than ∼10^-8 GeV cm^-2 sec^-1 sr^-1, approximately an order of magnitude below the predicted cosmogenic neutrino fluxes discussed in the literature. Absorption effects in the Earth become sizable in this case, resulting in less sensitivity to the cross-section. This limitation will be improved by the larger detection area of the full IceCube detector.
Fig. 3 shows the constraint when only the NC cross-section is varied. Enhancement of σ^NC_νN by a factor beyond 100 at √s ∼ 40 TeV is disfavored, but this strongly depends on the cosmic neutrino flux one assumes.
Fig. 3. Constraints on the all-flavor cosmic neutrino flux and the neutral current νN cross-section for the scenario in which only the neutral current reaction is enhanced by new physics beyond the standard model, for E^s_ν = 1 EeV and 10 EeV. The upper right region is excluded by the present analysis. The crosses provide reference points where the standard cross-section [6] and the expected GZK cosmogenic neutrino fluxes [7] are located.
Because the NC interaction does not absorb neutrinos during their propagation through the Earth, even the case in which the neutrino flux is small can bound the cross-section, but the limit becomes rather weak; the maximum allowed enhancement factor is of order ∼10^3.
IV. SUMMARY AND OUTLOOK
The IceCube 2007 observation indicates that any scenario enhancing either the NC or both the NC and CC equivalent cross-sections by more than a factor of 100 at √s ∼ 40 TeV is unlikely if the astrophysical neutrino fluxes are ∼10^-7 GeV cm^-2 sec^-1 sr^-1 in the EeV region. A study of constraints on the model-dependent cross-sections predicted by theories of black hole creation with extra dimensions is underway, with a dedicated treatment of the final-state particles produced in black hole evaporation.
ACKNOWLEDGMENTS
This work was supported in part by Grants-in-Aid for Scientific Research from the MEXT (Ministry of
Education, Culture, Sports, Science, and Technology) in
Japan.
REFERENCES
[1] V. S. Beresinsky and G. T. Zatsepin, Phys. Lett. 28B, 423 (1969).
[2] C. Tyler, A. V. Olinto, and G. Sigl, Phys. Rev. D 63, 055001 (2001).
[3] J. Alvarez-Muñiz et al., Phys. Rev. D 65, 124015 (2002).
[4] K. Mase, A. Ishihara, and S. Yoshida for the IceCube Collaboration, these proceedings.
[5] S. Yoshida, H. Dai, C. C. H. Jui, and P. Sommers, Astrophys. J. 479, 547 (1997).
[6] R. Gandhi, C. Quigg, M. H. Reno, and I. Sarcevic, Astropart. Phys. 5, 81 (1996); Phys. Rev. D 58, 093009 (1998).
[7] O. E. Kalashev et al., Phys. Rev. D 66, 063004 (2002).
Constraints on Extragalactic Point Source Flux from Diffuse Neutrino Limits
Andrea Silvestri* and Steven W. Barwick*
*Department of Physics and Astronomy, University of California, Irvine, CA 92697, USA.
Abstract. We constrain the maximum flux from extragalactic neutrino point sources by using diffuse neutrino flux limits. We show that the maximum flux from extragalactic point sources is E^2(dN_ν/dE) ≤ 5.1 × 10^-9 (L_ν/10^45 erg/s)^{1/3} GeV cm^-2 s^-1 for an ensemble of sources with average neutrino luminosity per decade L_ν. It depends only slightly on factors such as the inhomogeneous matter density distribution in the local universe, the luminosity distribution, and the assumed spectral index.
Keywords: Extragalactic sources, diffuse and point sources, high energy neutrinos
I. INTRODUCTION
The origin of ultra high energy cosmic rays (UHECR) is still unknown. AGN, GRBs, or processes beyond the standard model have been hypothesized as the sources of UHECRs, which may originate from regions of the sky correlated with AGN [1]. Therefore, if nearby AGN are the sources of the highest energy CRs, and if AGN emit neutrinos in addition to photons, protons and other charged particles, then the fluxes from individual AGN may be observable by the current generation of neutrino detectors. Several models predict a diffuse neutrino flux from AGN; in particular, ν-production has been predicted from the cores of radio-quiet AGN as presented in [2], [3], and from AGN jets and radio lobes as suggested in [4], [5], [6]. There are good but speculative reasons to expect a correlation between sources of cosmic rays and sources of neutrinos. Direct searches for diffuse [7] and point fluxes [8] by current telescopes have set the most stringent upper limits, but generally have not reached the required sensitivity, and the models suggest that challenges exist even for next generation telescopes.
Of course, one of the primary motivations for the construction of ν-telescopes is to search for unexpected sources with no obvious connection to the power emitted in the electromagnetic (EM) band. However, we show in this paper that the ν-flux from EG point sources can be constrained by the measured diffuse ν-flux limits. We also test models of individual sources against the constraints.
II. ANALYSIS
If the diffuse ν-flux is generated by an ensemble
of extragalactic (EG) sources, then only the nearest of
the diffusely distributed sources would be detectable
as point sources. Point sources of neutrinos are ob-
served when several neutrinos originate from the same
direction, and in the context of this study, only the
very nearest of the uniformly distributed sources are
detectable as point sources. The number of detectable
(or resolvable) point sources, N
s
, first proposed in [9],
is determined for a given diffuse ν-flux limit and point
source sensitivity. The N
s
calculation is based on three
general assumptions: (1) the sources are extragalactic
and uniformly distributed in space; (2) the ν-luminosity
follows a power law or broken power law distribution;
(3) the sources are assumed to emit neutrinos with an
E
−2
energy spectrum. Later, we discuss the robustness
of the constraint by investigating the validity and caveats
of the assumptions.
The number of resolvable sources N
s
for a distribution
of luminosities L
ν
per decade in energy is given by:
N_s \simeq \frac{\sqrt{4\pi}}{3}\,\frac{1}{\sqrt{\ln(E_{max}/E_{min})}}\,\frac{H_0}{c}\,K^{diff}_\nu\,(C_{point})^{-3/2}\,\frac{\langle L_\nu^{3/2}\rangle}{\langle L_\nu\rangle}\,\frac{1}{\xi}   (1)
where the parameter ξ depends on cosmology and source evolution as described in [9]. The ν-luminosity of the source, L_ν, has units of erg/s, and (E_min, E_max) defines the energy range of the flux sensitivity, with E_max = 10^3 E_min for a typical experimental condition. For the canonical energy spectrum proportional to E^-2, we use the all-flavor diffuse flux limit presented in [7] to obtain the ν_μ diffuse flux: K^diff_ν ≡ E^2 Φ_{ν_μ} = (1/3) E^2 Φ_{ν_all} = (1/3) × 8.4 × 10^-8 GeV cm^-2 s^-1 sr^-1 = 2.8 × 10^-8 GeV cm^-2 s^-1 sr^-1, valid for the energy interval 1.6 PeV < E < 6.3 EeV. This is the energy interval of interest for CR interactions with energies above the ankle. Below PeV energies, K^diff_ν can be obtained from [10]: K^diff_ν < 7.4 × 10^-8 GeV cm^-2 s^-1 sr^-1, valid between 16 TeV and 2.5 PeV. So similar diffuse flux limits K^diff_ν exist for the entire interval from TeV to EeV energies. C_point is the experimental sensitivity to ν-fluxes from point sources for an E^-2 spectrum, and we used C_point = E^2(dN_ν/dE) < 2.5 × 10^-8 GeV cm^-2 s^-1 [8].
The diffuse flux K^diff_ν and the point flux sensitivity C_point are linearly related by the following equation:

4\pi K^{diff}_\nu = \left[3\,\frac{c}{H_0}\,\frac{1}{r_{max}}\,N_s\right] \times C_{point}   (2)
where (c/H_0) represents the Hubble distance, given by c/H_0 = 3 × 10^5 (km s^-1) / 77 (km s^-1 Mpc^-1) ∼ 4 Gpc.
Fig. 1. Constraints on neutrino point fluxes derived from the UHE diffuse ν-flux limit [7] and from the VHE limit [10], assuming a range of neutrino luminosities L_ν = 10^40 - 10^45 erg/s. The current AMANDA limit [8] and the IceCube 1-yr sensitivity [28] to ν point fluxes are also shown (thin solid lines). Model predictions for the ν_μ point flux from EG sources are displayed as thin dotted-dashed lines: emission from 3C273 predicted by [3C273 (SP 92)] [13]; core emission due to pp interactions [3C273 (N93)] [14], and including pp and pγ interactions [3C273 (M93)] [15]; core emission due to pγ interactions [3C273 (SS96)] [24]; AGN jet emission, calculated for a 3C279 flare of 1 day duration [3C279 (AD04)] [16] and for continuous emission [3C279 (SP 92)] [13]; emission from NGC4151 [NGC4151 (SP 92)] [13] and core emission from NGC4151 due to pγ interactions [NGC4151 (SS96)] [2]; spectra predicted for Mkn 421 [Mkn 421 (SP 92)] [13], for Mkn 501 during the 1997 outburst [Mkn 501 (LM00)] [17] and for the flaring blazar Mkn 501 [Mkn 501 (MP 01)] [23]; radio-quiet AGN [RQQ (AM04)] [18] and GeV-loud blazars [GeV blazar (NS02)] [19]; emission from Cen A as described in [Cen A (AN04)] [20], [Cen A (HO07)] [21] and [Cen A (CH08)] [22]; emission from M87 [M87 (AN04)] [20]; and emission from the Coma galaxy cluster [Coma (CB98)] [25].
The quantity r_max defines the maximum observable distance for a point source of luminosity L_ν, and is given by:
r_{max} = \left[\frac{L_\nu}{4\pi\,\ln(E_{max}/E_{min})\,C_{point}}\right]^{1/2}   (3)
The constraint also holds for time variable sources, since it depends only on the observed luminosity and is independent of the duration of the variability [11]. Similarly, it holds for beamed sources, such as GRBs. However, for luminosities of the order of 10^51 erg/s, typical of GRB emission, we found that a dedicated search for GRBs leads to more restrictive limits [12].
III. RESULTS
We can now estimate a numerical value for N_s by inserting the ν diffuse flux limit and the point source sensitivity into Eq. 1: N_s ≃ (3.7 × 10^-29 cm^-1) × (K^diff_ν) × (C_point)^{-3/2} × (L_45)^{1/2} × 1/ξ ≃ 0.07, computed assuming L_45 = 10^45 erg/s and ξ = ξ_AGN ≃ 2.2, which accounts for the effects of cosmology and source evolution following the AGN distribution [9]. The estimate N_s ≃ 0.07 is compatible with the non-detection of any point sources.
The constraint on ν-flux is determined by setting
N
s
= 1 and inverting Eq. 1 to solve for C
point
:
E^2\,\frac{dN_\nu}{dE} \le \left[\frac{\sqrt{4\pi}}{3}\,\frac{1}{\sqrt{\ln(E_{max}/E_{min})}}\,\frac{H_0}{c}\,K^{diff}_\nu\,\sqrt{L_\nu}\,\frac{1}{\xi}\right]^{2/3}

E^2\,\frac{dN_\nu}{dE} \le 5.1 \times 10^{-9}\left(\frac{L_\nu}{10^{45}\,\mathrm{erg/s}}\right)^{1/3}\,\mathrm{GeV\,cm^{-2}\,s^{-1}}   (4)
valid for the same energy range, 1.6 PeV < E < 6.3 EeV, as the diffuse flux limit K^diff_ν. This result defines a benchmark flux constraint Φ_C ≡ E^2(dN_ν/dE) ≤ 5.1 × 10^-9 GeV cm^-2 s^-1 on neutrino fluxes from bright (L_ν = 10^45 erg/s) extragalactic point sources, which is a factor of five lower than present experimental limits from direct searches. Note from Eq. 2 that for the case of N_s < 1 the distance ratio (c/H_0)/r_max > 1, which occurs for sources well within the Hubble distance.
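For reference, the benchmark number can be reproduced with a few lines of Python evaluating the bracket of Eq. 4 with the inputs quoted above (K^diff_ν = 2.8 × 10^-8 GeV cm^-2 s^-1 sr^-1, E_max/E_min = 10^3, c/H_0 ≈ 4 Gpc, ξ = 2.2); the unit conversions are ours, and the steradian factor is dropped as in the text.

```python
import numpy as np

GEV_PER_ERG = 624.15
CM_PER_MPC = 3.086e24

def flux_constraint(L_nu_erg_s, K_diff=2.8e-8, ratio=1.0e3,
                    hubble_dist_mpc=3.0e5 / 77.0, xi=2.2):
    """Benchmark point-source flux constraint of Eq. (4), in GeV cm^-2 s^-1,
    for an ensemble of sources of mean luminosity L_nu (erg/s per decade)."""
    H0_over_c = 1.0 / (hubble_dist_mpc * CM_PER_MPC)          # cm^-1
    L_gev = L_nu_erg_s * GEV_PER_ERG                           # GeV/s
    bracket = (np.sqrt(4.0 * np.pi) / 3.0) / np.sqrt(np.log(ratio)) \
              * H0_over_c * K_diff * np.sqrt(L_gev) / xi
    return bracket ** (2.0 / 3.0)

print(flux_constraint(1.0e45))   # about 5e-9, reproducing the benchmark of Eq. (4)
print(flux_constraint(1.0e40))   # lower luminosity gives a tighter constraint
```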
Fig. 1 shows these results, represented by the constraint derived from the Ultra High Energy (UHE) diffuse ν-flux limit for energies above a PeV (thick solid line) and from the Very High Energy (VHE) limit in the TeV-PeV range (thick dotted line). Model predictions for the ν_μ point flux from EG sources (dotted/dashed lines) have been tested against this constraint and are summarized in Tab. I.
TABLE I
Summary of models for ν_μ point fluxes from extragalactic sources constrained by the results of this work. Models are ordered according to the type of neutrino source. The parameter Band_γ represents the photon energy band assumed in the given model. Φ^model_ν denotes the neutrino flux predicted by a given model for an E^-2 spectrum, or a flux that is nearly constant in E^2 dN/dE over a large energy range. The corresponding constrained flux for an E^-2 spectrum is the benchmark flux Φ_C. If the source is commonly replicated in the universe under our assumptions, then a ratio R_flux = Φ_C/Φ^model_ν < 1 indicates a model constrained by this work.

Model             | Band_γ    | Φ^model_ν (GeV/cm^2 s) | R_flux | Reference
[3C273 (SP 92)]   | IR/x-ray  | 1.0 × 10^-8            | 0.51   | [13]
[3C273 (N93)]     | x-ray     | 2.5 × 10^-8            | 0.20   | [14]
[3C273 (M93)]     | γ-ray/IR  | 1.0 × 10^-8            | 0.51   | [15]
[3C279 (AD04)]    | GeV       | 2.0 × 10^-7            | 0.03   | [16]
[NGC4151 (SP 92)] | IR/x-ray  | 3.5 × 10^-8            | 0.14   | [13]
[Mkn 421 (SP 92)] | IR/x-ray  | 9.0 × 10^-9            | 0.10   | [13]
[Mkn 501 (LM00)]  | TeV       | 2.5 × 10^-8            | 0.57   | [17]
[RQQ (AM04)]      | x-ray/UV  | 1.0 × 10^-8            | 0.51   | [18]
[Cen A (AN04)]    | TeV       | 1.5 × 10^-8            | 0.34   | [20]
[Cen A (CH08)]    | TeV       | 6.0 × 10^-9            | 0.85   | [22]
Tab. I summarizes the results of the constraint Φ_C compared to a number of models of neutrino point fluxes from extragalactic sources. The fluxes Φ^model_ν predicted by these models can be directly compared to Φ_C since they either follow an E^-2 spectrum or are nearly constant in E^2 dN/dE over a large energy range. These models are constrained because their predicted fluxes exceed the benchmark flux set by Φ_C. If the source is commonly replicated in the universe with the assumptions defined in Sec. II, then the ratio R_flux = Φ_C/Φ^model_ν < 1 identifies a model constrained by this analysis.
Models have also been presented which predict ν-fluxes from nearby AGN [20], [21], [22], such as Centaurus A (Cen A) and M87, at distances of 3.4 Mpc and 16 Mpc, respectively. We note that these predictions lie below the upper value of the constraint Φ_C and are compatible with our results.
A few other models, as shown in Fig. 1, present flux predictions which deviate strongly from an E^-2 spectrum, and for this class of models a direct comparison with the benchmark flux Φ_C is less straightforward. In these cases, the predicted energy spectra should be integrated over the energy interval that defines the constraint to obtain the total neutrino event rate, N^model_ν. This result should then be compared to the integrated neutrino event rate N_C determined by the constraint and by the characteristics of the given neutrino detector.
IV. DISCUSSION
The thick dark horizontal line in Fig. 1 indicates
our primary constraint Φ
C
. It was derived for a mean
neutrino luminosity that characterizes the brightest AGN
in the EM band. The constraint is even stronger for
less luminous classes of sources. In this section we
address the robustness of the constraint by focusing the
discussion on the three assumptions listed in Sec. II.
A. Homogeneity of source distribution
The matter distribution within 50 Mpc of the Milky
Way is far from uniform, which suggests the possibility
that the number density of neutrino sources, n
s
, may
be higher than the universal average if n
s
is correlated
with matter density. We argue that, in practice, the
local inhomogeneity affects only the class of sources
characterised by low luminosities. The bright sources are
too rare to be affected by local matter density variation
- the likelihood of finding a bright neutrino source
within 50 Mpc is small to begin with (if EM luminosity
and neutrino luminosity are comparable), and the local
enhancements in matter density insufficient to change
the probability of detection.
On the other hand, low luminosity sources are more
likely to be within 50 Mpc, and their density could be
affected by fluctuations (e.g. by a factor of 15 [26] at 5
Mpc) in the local matter density. In this case, the flux
constraint (Eq. 4) should be adjusted to account for the
higher density of local matter, Φ
?
= Φ∗(n
local
/?n
s
?)
2/3
(Tab II). However, the adjusted fluxes are below the
benchmark flux constraint Φ
C
.
To exceed Φ_C, a source of a given luminosity L_ν must be within a distance d_l = (4π/3)^{1/3} · r_max · (Φ*/Φ_C)^{1/2}. However, we found no counterparts in the EM band within a distance d_l that would violate the constraint Φ_C.
TABLE II
Adjusted Φ* to account for local n_s enhancement.

L_ν (erg/s) | Φ (GeV/cm^2 s) | n_l/⟨n_s⟩ [26] | Φ* (GeV/cm^2 s) | d_l (Mpc)
8 × 10^41   | 0.5 × 10^-9    | 15             | 2.8 × 10^-9     | 3.7
1 × 10^43   | 1.1 × 10^-9    | 5              | 3.1 × 10^-9     | 16
1 × 10^44   | 2.4 × 10^-9    | 2.5            | 4.3 × 10^-9     | 55
B. Distribution function of ν-luminosity
The number of detectable sources N_s depends on ⟨L_ν^{3/2}⟩/⟨L_ν⟩, but the luminosity distribution for neutrino sources is not known. However, if the distribution
function follows a broken power law, as is measured for several classes of energetic sources in various electromagnetic bands, then the estimate of N_s based on the full distribution agrees with an estimate using the mean luminosity of the distribution to within a few percent, as shown in [11]. So, to an excellent approximation, the mean value of the luminosity distribution is sufficient to predict N_s ∼ ⟨L_ν⟩^{1/2} for power law or broken power law distributions. The most common luminosities can only be observed at relatively small distances, so source evolution and cosmological effects are negligible. Larger values of the luminosity are too rare to contribute significantly.
C. Energy spectrum of the source
The constraint can be extended to energy spectra
that differ from the assumed E
−2
dependence, but the
constraint applies over a restricted energy interval that
matches the energy interval of the diffuse neutrino limits.
Experimental diffuse limits span two different energy
regions, VHE and UHE, and either limit can be inserted
into Eq. 4. The restriction in energy range is required to
avoid extrapolating the energy spectrum to unphysical
values. In other words, for power law indices far from
2, the spectrum must cut-off at high energies for indices
γ < 2, or at low energies for indices γ > 2. Subject
to this restriction, we find that the constraint depends
weakly on the assumed spectral index. For example,
the constraints improve by a factor 2 for hard spectra
(γ = 1) and weaken by roughly the same factor for soft
spectra (γ = 3) [11].
On the other hand, it could be argued that the energy
spectrum dN
ν
/dE is completely unknown. In this case,
instead of relying on the power law of neutrino luminos-
ity L
ν
, one could derive the constraints by examining the
measured number density n
s
, (n
s
∝ 1/L
ν
) for a given
class of sources [27].
V. CONCLUSION
The constraint on neutrino fluxes from extragalactic point sources is E^2(dN_ν/dE) ≤ 5.1 × 10^-9 (L_ν/10^45 erg/s)^{1/3} GeV cm^-2 s^-1, which is a factor of 5 below current experimental limits from direct searches if the average L_ν distribution is comparable to the EM luminosity that characterizes the brightest AGN. We tested a number of model predictions for ν point fluxes, and models which predict fluxes higher than the constraint have been restricted by this analysis. The constraint is strengthened for less luminous sources, and is not competitive with direct searches for highly luminous explosive sources, such as GRBs. We found that the constraint is robust when accounting for the non-uniform distribution of matter that surrounds our galaxy, for energy spectra that deviate from E^-2, or for various models of cosmological evolution. The constraint suggests that the observation of EG neutrino sources will be a challenge for kilometer scale detectors unless the source is much closer than the characteristic distance between sources, d_l, after accounting for local enhancement of the matter density. Although the constraint cannot rule out the existence of a unique, nearby EG neutrino source, we note that we found no counterparts in the EM band with the required luminosity and distance to violate the constraint, assuming L_ν ∼ L_γ.
VI. ACKNOWLEDGMENT
The authors acknowledge support from U.S. National
Science Foundation-Physics Division, and the NSF-
supported TeraGrid system at the San Diego Supercom-
puter Center (SDSC).
REFERENCES
[1] J. Abraham et al., Science 318, 938 (2007).
[2] F. Stecker et al., Phys. Rev. Lett. 66, 2697 (1991); erratum Phys.
Rev. Lett. 69, 2738 (1992).
[3] F. Stecker, Phys. Rev. D 72, 107301 (2005).
[4] K. Mannheim, Astropart. Phys. 3, 295 (1995).
[5] F. Halzen and E. Zas, Astrophys. J. 488, 669 (1997).
[6] R. J. Protheroe, astro-ph/9607165 (1996).
[7] A. Silvestri et al., Proc. 31st ICRC, these proceedings (2009).
[8] R. Abbasi et al., Phys. Rev. D 79, 062001 (2009).
[9] P. Lipari, Nucl. Instrum. Meth. A 567, 405 (2006).
[10] A. Achterberg et al., Phys. Rev. D 76, 042008 (2007).
[11] A. Silvestri, Ph.D. Thesis, University of California, Irvine (2008).
[12] A. Achterberg et al., Astrophys. J. 674, 357 (2008).
[13] A. P. Szabo and R. J. Protheroe, in High Energy Neutrino
Astrophysics, World Scientific (Singapore), 24 (1992).
[14] L. Nellen et al., Phys. Rev. D 47, 5270 (1993).
[15] K. Mannheim, Phys. Rev. D 48, 2408 (1993).
[16] A. Atoyan and C. Dermer, New Astron. Rev. 48, 381 (2004).
[17] J. G. Learned, and K. Mannheim, Annu. Rev. Nucl. Part. Sci. 50,
679 (2000).
[18] J. Alvarez-Muniz and P. Meszaros, Phys. Rev. D 70, 123001
(2004).
[19] A. Neronov and D. Semikoz, Phys. Rev. D 66, 123003 (2002).
[20] L. A. Anchordoqui et al., Phys. Lett. B 600, 202 (2004).
[21] F. Halzen and A. O’Murchadha, astro-ph/0802.0887 (2008).
[22] A. Cuoco and S. Hannestad, Phys. Rev. D 78, 023007 (2008).
[23] A. Mücke and R. J. Protheroe, Astropart. Phys. 15, 121 (2001).
[24] F. Stecker and M. H. Salamon, Space Sci. Rev. 75, 341 (1996).
[25] S. Colafrancesco and P. Blasi, Astropart. Phys. 9, 227 (1998).
[26] T. Jarrett, private communication.
[27] M. Kowalski, private communication.
[28] J. Ahrens et al., Astropart. Phys. 20, 507 (2004).
Study of electromagnetic backgrounds in the 25-300 MHz frequency band at the South Pole
Jan Auffenberg*, Dave Besson†, Tom Gaisser‡, Klaus Helbing*, Timo Karg*, Albrecht Karle§, and Ilya Kravchenko¶
*Bergische Universität Wuppertal, Fachbereich C - Astroteilchenphysik, 42097 Wuppertal, Germany
†Dept. of Physics & Astronomy, University of Kansas, Lawrence, KS 66045, USA
‡Bartol Research Institute, University of Delaware, Newark, DE 19716, USA
§Dept. of Physics, University of Wisconsin, Madison, WI 53706, USA
¶Dept. of Physics & Astronomy, University of Nebraska, Lincoln, NE 68588, USA
Abstract. Extensive air showers are detectable by radio signals with a radio surface detector. A promising theory of the dominant emission process is coherent synchrotron radiation emitted by e+e- shower particles in the Earth's magnetic field (geosynchrotron effect). A radio air shower detector could extend IceTop, the air shower detector on top of IceCube. This could significantly increase the sensitivity of IceTop to higher shower energies and to inclined showers. Muons from air showers are a major part of the background of the neutrino telescope IceCube; thus a surface radio air shower detector could act as a veto detector for this muonic background. Initial radio background measurements with a single antenna in 2006 revealed a continuous electromagnetic background promising a low energy threshold for a radio air shower detector. However, short pulsed radio interference can mimic real signals and has to be identified in the frequency range of interest. These properties of the electromagnetic background are being measured at the South Pole during the Antarctic winter 2009 with two different types of surface antennas. In total, four antennas are placed at distances of up to 400 m from each other. They are read out using the RICE DAQ with an amplitude threshold trigger and a minimum bias trigger. Results of the first three months of measurements are presented.
Keywords: Radio air shower detection, EMI background, South Pole
I. INTRODUCTION
The emission of coherent synchrotron radiation by e+e- shower particles in the Earth's magnetic field provides a measurable broadband signal from 10 MHz to 150 MHz at the ground [1]. The South Pole site, with its dedicated infrastructure and a limited number of radio sources, is possibly one of the best places in the world for the detection of air showers via their low frequency radio emission. Another feature of the South Pole site, in comparison to other radio quiet regions in the world, is the possibility of making studies in coincidence with other astrophysical experiments like
Fig. 1. Schematic view of a high energy radio air shower detector
expanding the active area of IceTop. The distance between single
antenna stations can be several hundred meters.
the neutrino detector IceCube and the air shower detector IceTop. IceCube is a neutrino detector [2] embedded in the Antarctic ice. One of the main aims of IceCube is to measure neutrinos from cosmic sources. The strategy of IceCube is to measure up-going particles from the northern hemisphere: only neutrinos or other weakly interacting particles are not absorbed by the Earth and are able to interact in the South Pole ice and produce measurable particles like muons. IceTop is built on the surface above IceCube (Fig. 1). It is designed to detect cosmic ray air showers from 10^15 eV up to 10^18 eV.
This special environment leads to two different options, with different focus, for a radio air shower detector on top of IceCube and in its vicinity [3].
• An infill detector built of radio antennas on the same footprint as IceTop, at similar spacings but shifted with respect to the tank array. This would provide an additional powerful observation technique for cosmic ray air shower research at the South Pole. It would be possible to study air showers with three independent detector systems: IceTop, IceCube and the radio detector.
• An areal expansion of IceTop with radio surface antennas extends the air shower detector
to higher energy primary particles and to higher inclination angles [6]. The idea is to build an antenna array in rings of increasing radius around the IceTop array. For ultra-high energy neutrinos, E > 300 TeV, the neutrino-nucleon cross section is large enough for absorption in the Earth to become increasingly important. For cosmogenic neutrinos, produced by the GZK mechanism for example, most of the signal comes from near the horizon. Thus muon bundles induced by air showers can be misinterpreted as a neutrino signal in the IceCube detector. The role of an antenna array expansion of the IceTop detector is to detect these air showers with high inclination angles as a veto for the IceCube detector.
II. ELECTROMAGNETIC BACKGROUND
MEASUREMENTS AT THE SOUTH POLE WITH RICE
Initial background measurements with a single antenna in 2006 indicated a continuous electromagnetic background promising a low detector threshold [3]. Together with air shower simulations of the radio emission, these background measurements suggest that air showers can be detected with a threshold below 10 PeV in primary energy. The study presented here is aimed at long-term monitoring of the electromagnetic background over several months, to investigate the amount of pulsed radio frequency interference (RFI) and potential long-term variations of the continuous background. The data acquisition of the RICE experiment, constructed to investigate radio detection methods for high energy neutrinos in ice [7], is well suited to be extended by four surface antennas. The RICE DAQ consists of 6 oscilloscopes with 4 channels each. The sampling rate of each channel is 1 GHz. The dynamic range of the scopes is ±2 V with 12-bit digitization.
Three different kinds of trigger are implemented in RICE:
1) Unbiased events every 10 min, i.e. a forced readout of all channels.
2) The RICE simple multiplicity trigger. An event is read out if the signal in four or more RICE antennas is above a threshold. The threshold is calculated at the beginning of every run to be 1.5 times the RMS of unbiased events. These events should have no signal over threshold in the surface antennas.
3) The RICE surface trigger. This is a RICE simple multiplicity trigger with one or more surface antennas above threshold as part of the trigger. It includes RICE triggers where only surface antennas have a signal over threshold.
The first and the third kind of trigger are of great
interest for the surface radio background studies. The
second trigger strategy is interesting to understand
the in-ice RFI not reaching the surface. It is the most
interesting event class for the RICE neutrino detection.
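As an illustration of the surface trigger logic, the toy sketch below applies a 1.5 × RMS amplitude threshold and a four-fold multiplicity requirement to an array of waveforms; the data layout and the per-channel noise reference are assumptions made for this example, not the actual RICE DAQ interface.

```python
import numpy as np

def multiplicity_trigger(waveforms_V, noise_ref_V, multiplicity=4, factor=1.5):
    """Toy version of the RICE simple multiplicity trigger: keep the event if
    `multiplicity` or more channels exceed `factor` times a per-channel noise
    reference level derived from unbiased events. The channels-by-samples data
    layout and the noise reference are illustrative assumptions."""
    peaks = np.max(np.abs(waveforms_V), axis=1)   # peak amplitude per channel
    fired = peaks > factor * noise_ref_V          # per-channel discriminator
    return np.count_nonzero(fired) >= multiplicity, fired

# Toy event: 20 channels of Gaussian noise with a pulse injected on 5 channels.
rng = np.random.default_rng(0)
event = rng.normal(0.0, 0.05, size=(20, 1024))
event[:5, 500] += 0.5
passed, fired = multiplicity_trigger(event, noise_ref_V=np.full(20, 0.17))
print(passed, int(fired.sum()))
```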
Fig. 2. Top view of the IceCube footprint. Two Fat Wire Dipole antennas were deployed at 350 m distance from the MAPO building (crosses near the SPASE building). The signals are amplified with 60 dB MITEQ AU-4A-0150 low noise amplifiers and connected with RG59 cables to the MAPO building. Two four-arm dipole antennas are located on the roof of the MAPO building and are amplified with 39 dB MITEQ AU-1464-400 amplifiers.
Fig. 3. Comparison of results from antenna simulations and measured properties (DATA). The data are a measurement of the voltage standing wave ratio (VSWR) of the Fat Wire Dipole on the South Pole snow at the final position of the antenna. The simulation was made with EZNEC+ v. 5.0 without ground effects from the snow surface. The frequency response of the antenna is well described by the simulation; including ground effects should further improve the agreement between simulation and data.
In total four surface antennas were deployed in the
South Pole season 2008/2009 on the IceCube footprint
(Fig. 2). Two Fat Wire Dipole antennas (Fig. 4) are con-
nected to RICE with RG59 signal cables (1505A Coax)
of the decommissioned SPASE experiment (Fig. 2).
These broad band antennas allow for measurements
in the frequency range from 25-500 MHz.

TABLE I
Antenna positions and cable delays to the RICE DAQ.

Antenna | x [m] | y [m]  | z [m] | cable delay [ns]
MAPO1   | 47    | -28.0  | 18    | 144
MAPO2   | 25    | -20    | 18    | 129
SPASE1  | -135  | -366   | 1     | 2126
SPASE2  | -148  | -348   | 1     | 2729

Fig. 4. Conducting elements of the Fat Wire Dipole antenna deployed near the SPASE building. It is 3 m long and 0.8 m in diameter. The wires are mounted on a wooden carcass.
Figure 3
shows measurements of the voltage standing wave ratio
(VSWR) of a Fat Wire Dipole lying on the Antarctic
snow in its final position compared to simulations of
the antenna using the NEC2 software package [4].
The frequency response is already well described by
simulations without the snow ground. The connection via the SPASE cables allows measurements at distances of several hundred meters from the MAPO building (which houses electronics and is therefore a potentially large RFI source) and from the antennas on its roof, on the IceCube footprint. To compensate for the attenuation of the long signal cable (ca. 40 dB at 75 MHz), 60 dB low noise preamplifiers (MITEQ AU-4A-0150) are installed between antenna and signal cable. The power is transmitted through the same cable using bias tees. To avoid saturation of the preamplifiers by the input power, 25 MHz high pass filters and 300 MHz low pass filters were installed between the amplifiers and the Fat Wire Dipole antennas. The antennas near the SPASE building lie on the snow surface orthogonal to each other; thus they measure orthogonal polarizations of the signals. The other two antennas are deployed on the roof of the MAPO building. These four-arm dipole antennas, with an amplification of about -2 dB at >70 MHz, are difficult to calibrate in the surroundings of the MAPO building and thus will only be used for event reconstruction (Fig. 6). The signals of the antennas on the roof of the MAPO building are amplified with 39 dB preamplifiers (AU-1464-400); 300 MHz low pass filters are used for these roof antennas, too. Table I shows the positions of the surface antennas in AMANDA coordinates and their cable delays.
Fig. 5. Picture of the four arm dipole antennas MAPO1 and MAPO2.
Every arm has a length of 0.7 m. The wooden stand is 1.2 m high.
The final position of the antennas is the roof of the MAPO building.
III. ANALYSIS STRATEGIES
From the technical point of view one can divide the
analysis of the data into two parts.
A. Event Reconstruction And Mapping
The reconstruction of the origin of RFI events seen
in more than two antennas will indicate possible noise sources at the South Pole, e.g. the IceCube counting house or the South Pole Station. A map of these sources
will help to improve air shower detection.
A χ² minimization on time residuals is used to reconstruct the source location of single events. To test the event reconstruction algorithm, we use signals generated with a GHz horn antenna in front of the MAPO building. Taking into account the cable delays and the antenna positions (Table I), we are able to reconstruct the events in 3D and in time. Figure 6 shows that the reconstruction with horn antenna signal data works well: it is accurate to within several meters and clearly shows that the horn antenna lies in front of the MAPO building near the antenna MAPO1. The horn antenna data reconstruct to the actual position within 50 m with an RMS of 2.3 m. Figure 7 shows a typical noise event triggered with the RICE surface trigger. The event can be nicely reconstructed to have its origin in the building of the 10 m Telescope, which is the topmost building in Fig. 2.
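The reconstruction step can be illustrated with a small least-squares sketch: it minimizes the summed squared time residuals between cable-delay-corrected arrival times and the propagation times expected from a trial source position. The antenna positions and delays follow Table I; the propagation speed, the example arrival times and the use of scipy are illustrative assumptions, not the actual analysis code.

```python
import numpy as np
from scipy.optimize import minimize

C = 0.2998  # assumed signal propagation speed in air, m/ns

# Antenna positions (AMANDA coordinates, m) and cable delays (ns) from Table I
antennas = np.array([[47.0, -28.0, 18.0],
                     [25.0, -20.0, 18.0],
                     [-135.0, -366.0, 1.0],
                     [-148.0, -348.0, 1.0]])
cable_delay = np.array([144.0, 129.0, 2126.0, 2729.0])

def chi2(params, t_meas):
    """Sum of squared time residuals for a trial source position and emission time."""
    x, y, z, t0 = params
    t_exp = t0 + np.linalg.norm(antennas - [x, y, z], axis=1) / C + cable_delay
    return np.sum((t_meas - t_exp) ** 2)

# t_meas would hold the measured arrival times of one event (illustrative values here)
t_meas = np.array([400.0, 395.0, 3100.0, 3700.0])
fit = minimize(chi2, x0=[0.0, 0.0, 10.0, 0.0], args=(t_meas,), method="Nelder-Mead")
print("reconstructed source position (x, y, z) and emission time:", fit.x)
```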
B. Events and background in the frequency domain
Another Topic is the rate and variation of the different
RFI sources during a whole year of measurement. It
is expected that RFI events have a typical signature in
the frequency domain. This will help to find an ideal
frequency region for a radio air shower detector. The
continuous background is monitored over a whole year
using the unbiased RICE events. One of the highest
peaks on top of the continuous radio background is ex-
pected to be the meteor radar at 46.6 MHz to 47.0 MHz
and 49.6 MHz to 50.0 MHz [5]. It is clearly observable
in the DFT of the recorded data together with a few
other expected sources of filterable continuous narrow
Fig. 6. Distribution of the reconstructed transmitter events in the xy-
plane. The four crosses indicate the position of the antennas on the
roof of the MAPO building (18m above the snow) and on the snow
surface near the SPASE building. The dots are reconstructed positions
of triggered signals from a GHz horn antenna, measured with all four
antennas. The reconstructed events are in very good agreement with the transmitter position in front of the MAPO1 antenna and demonstrate the potential of the instrument.
Fig. 7. Example of a triggered event, seen by all four surface antennas in the time domain. The signal of the event reconstructs to a source in the building of the 10 m telescope.
band RF signals.
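As a rough illustration of how such narrow-band lines can be located in the unbiased data, the sketch below computes the spectrum of one recorded trace and flags bins standing well above the median level, which is how lines such as the meteor radar near 46.6-47.0 MHz would show up; the synthetic trace and the factor-of-five criterion are illustrative assumptions.

```python
import numpy as np

FS = 1e9  # RICE sampling rate, 1 GHz

def narrowband_peaks(waveform, excess=5.0):
    """Return frequencies whose spectral power exceeds `excess` times the median level.

    `waveform` is one unbiased-event trace; the factor 5 is an illustrative choice."""
    spectrum = np.abs(np.fft.rfft(waveform)) ** 2
    freqs = np.fft.rfftfreq(len(waveform), d=1.0 / FS)
    return freqs[spectrum > excess * np.median(spectrum)]

# Example: a synthetic trace with a 46.8 MHz line (mimicking the meteor radar) on white noise
t = np.arange(8192) / FS
trace = np.sin(2 * np.pi * 46.8e6 * t) + 0.3 * np.random.randn(t.size)
print(narrowband_peaks(trace) / 1e6, "MHz")
```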
The DFT of the constant background measured with the fat wire dipoles is the basis for evaluating the limit of detectable signal strength. For this, it is of great importance to correct the measured data for the antenna properties, the high- and low pass filtering, the amplifier response, and the attenuation of the signal cable. Another important topic will be to determine long-term variations of the background during one year.
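A minimal sketch of this correction chain is given below, assuming hypothetical response curves for the antenna and the band-pass filters; the flat 60 dB amplifier gain and the cable-loss model anchored to the quoted 40 dB at 75 MHz are simplifying assumptions, not calibration results.

```python
import numpy as np

FS = 1e9              # sampling rate of the RICE scopes, Hz
CABLE_DB_75MHZ = 40.0  # quoted cable attenuation at 75 MHz, dB

def corrected_spectrum(trace, h_antenna, h_filter):
    """Divide the measured spectrum by the full signal-chain response.

    `h_antenna` and `h_filter` are hypothetical complex response arrays on the
    rfft frequency grid; the amplifier and cable terms below are simple
    placeholders (flat 60 dB gain, sqrt(f) cable loss anchored at 75 MHz)."""
    freqs = np.fft.rfftfreq(len(trace), d=1.0 / FS)
    g_amp = 10 ** (60.0 / 20.0)                                  # 60 dB preamplifier
    att_db = CABLE_DB_75MHZ * np.sqrt(np.maximum(freqs, 1.0) / 75e6)
    h_cable = 10 ** (-att_db / 20.0)
    measured = np.fft.rfft(trace)
    return measured / (h_antenna * h_filter * g_amp * h_cable)
```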
RICE-triggered surface events are studied in the frequency domain to determine whether a discrimination of air shower radio signals from RFI noise is feasible. Most of the narrow-band noise events could, for example, be filtered out in a future air shower detector system.
IV. CONCLUSION
As part of RICE, the four-antenna surface detection system for radio signals is able to study the conditions of the radio background at the South Pole in the frequency range from 25-150 MHz and above. The threshold trigger strategy together with RICE allows for an estimation of the amount of RFI noise and of its sources at the IceCube site. An analysis of the signals in the frequency domain will be used to develop strategies to suppress the false trigger rate of a radio air shower detector. Measurements of the continuous background and its variations are the basis for estimating the energy threshold of a radio air shower detector in different frequency bands. The RFI measurements of the surface antennas will help to understand the signals measured with in-ice radio detection systems.
REFERENCES
[1] T. Huege, H. Falcke, Astron. & Astrophys. 412, 1934 (2003).
[2] http://www.icecube.wisc.edu.
[3] J. Auffenberg et al., ICRC 2007, arXiv:0708.3331.
[4] http://www.nec2.org.
[5] E. M. Lau et al., Radio Sci. 41, RS4007 (2006).
[6] J. Auffenberg et al., ARENA 2008, doi:10.1016/j.nima.2009.03.179.
[7] I. Kravchenko et al., Phys. Rev. D 73, 082002 (2006).
Neutrino signal from γ-ray loud binaries powered by high-energy protons
Andrii Neronov∗† and Mathieu Ribordy‡
∗ INTEGRAL Science Data Centre, Ch. d'Ecogia 16, 1290 Versoix, Switzerland
† Geneva Observatory, University of Geneva, 51 ch. des Maillettes, 1290 Sauverny, Switzerland
‡ Ecole Polytechnique Fédérale de Lausanne, 1015 Ecublens, Switzerland
Abstract. We consider a hadronic model of activity of Galactic γ-ray-loud binaries. We show that in such a model the multi-TeV neutrino flux from the source can be much higher and/or harder than the detected TeV γ-ray flux. This is related to the fact that most of the neutrinos can be produced in pp interactions close to the bright massive star, in a region which is optically thick for the TeV γ-rays. Considering the example of LS I +61°303, we show that the expected neutrino signal, detectable within ∼3 years of exposure with ICECUBE, will be marginally sufficient to constrain the spectral characteristics of the neutrino signal.
Keywords: Neutrinos, gamma-ray astronomy.
I. INTRODUCTION
Recently discovered "γ-ray-loud" binaries (GRLB) form a new sub-class of Galactic binary star systems which emit GeV-TeV γ-rays. These are high-mass X-ray binaries (HMXRB) composed of a compact object (a black hole or a neutron star) orbiting a massive star. The detection of γ-rays with energies up to 10 TeV from these systems shows that certain HMXRBs host powerful particle accelerators producing electrons and/or protons with energies above 10 TeV.
Different theoretical models of γ-ray activity of HMXRBs (see [1] for a review) have at least one common point: they are all based on the assumption that the γ-ray emission is produced as a result of the interaction of a relativistic outflow from the compact object (a jet from a black hole, or a wide-angle wind from a pulsar) with the wind and radiation emitted by the companion massive star. With the exception of Cyg X-1, all the known GRLBs have similar spectral energy distributions (SED), peaking in the MeV-GeV energy band. The physical mechanism of production of the MeV-GeV bump in the spectra is not clear. It can be synchrotron emission from electrons with energies well above a TeV [2], [3]. Otherwise, it can be produced via inverse Compton (IC) scattering of the UV thermal emission from the massive star by electrons with energies E ∼ 10 MeV [4], [5].
The available multi-wavelength data do not allow us to constrain the composition of the relativistic outflow from the compact object. On one side, the multi-TeV or 10 MeV electrons, responsible for the production of the MeV-GeV bump in the SED, could be injected into the emission region from an e⁺e⁻ pair-loaded wind. Otherwise, these electrons can be secondary particles produced in e.g. proton-proton collisions, if the relativistic wind is proton-loaded. The only direct way to test whether relativistic protons are present in the γ-ray emission region would be the detection of neutrinos of multi-TeV energies. A general problem for the estimates of the expected neutrino flux is the uncertainty of the attenuation of the γ-ray flux in the TeV band, which makes the derivation of the estimate of the neutrino flux and spectral characteristics based on the observed TeV γ-ray emission highly uncertain (see [6] for a discussion of the particular case of LS I +61°303).
In the absence of a direct relation between the characteristics of the observed TeV γ-ray and neutrino emission from a GRLB, the only way to constrain the possible neutrino signal from the source is via detailed modelling of the broad band spectrum of the source within the hadronic model of activity. The idea is that the pp interactions, which result in the production of neutrinos, also result in the production of e⁺e⁻ pairs, which release their energy via synchrotron, IC and bremsstrahlung emission. The total power released in the pp interactions determines the overall luminosity of emission from the secondary pairs. The knowledge of the electromagnetic luminosity can be used to constrain the power released in pp interactions and hence the neutrino luminosity of the source.
In this contribution we implement this method of estimation of the neutrino flux to explore the possibility of detection of a neutrino signal from GRLBs. We concentrate mostly on the particular example of the LS I +61°303 system, because it is the only known persistent GRLB in the Northern hemisphere available for observations with the ICECUBE neutrino telescope [7].
II. HADRONIC MODEL OF γ-RAY ACTIVITY
In the hadronic model, the primary source of the high-energy activity of the system is high-energy protons. The presence of high-energy protons in relativistic outflows from compact objects (stellar mass and supermassive black holes, neutron stars) is usually difficult to detect, because of the very low energy loss rates of protons. GRLBs provide a unique possibility to "trace" the presence of protons/ions in the relativistic outflows
Fig. 1. Mechanism of production of neutrinos in interactions of high-energy protons ejected by the compact object with the dense equatorial disk of the Be star.
generated by compact objects. The dense matter and radiation environment created by the companion massive star provides abundant target material for the protons in the relativistic outflow. Interactions of high-energy protons with the ambient matter and radiation fields, created by the presence of a bright massive companion star, lead to the production and subsequent decays of pions. This results in the emission of neutrinos and γ-rays from the source and in the deposition of e⁺e⁻ pairs throughout the proton interaction region. Radiative cooling of the secondary e⁺e⁻ pairs leads to broad-band synchrotron and IC emission from the source.
Interactions of the high-energy protons with the radiation field produced by the bright massive star in the system (e.g. a Be star with temperature T_* ∼ 3 × 10^4 K in the case of LS I +61°303 and PSR B1259-63) are efficient only for protons with energies above E_p ≥ [200 MeV/ε_*] m_p ≃ 2 × 10^16 [ε_*/10 eV]^-1 eV, where ε_* ≃ 3kT_* is the typical energy of photons of the stellar radiation. On the contrary, interactions of the high-energy protons with the protons from the dense stellar wind can be efficient for protons of much lower energies.
In the case when the massive star is a Be-type star, the rate of pp interactions can be strongly enhanced if the high-energy protons are able to penetrate into the dense equatorial disk known to surround this type of star. An obstacle for the penetration of the high-energy protons, accelerated e.g. close to the compact object, into the disk could be the presence of a magnetic field, which would deviate the proton trajectories away from the disk. However, the Larmor radius of the highest energy protons, R_L ≃ 4 × 10^12 [E_p/10^15 eV] [B/1 G]^-1 cm, where B is the magnetic field, could be comparable to the size of the system. Thus, if the magnetic field in the region of contact between the stellar wind and the relativistic outflow is not larger than several Gauss, the highest energy protons can freely penetrate into the dense stellar wind region.
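As a quick numerical cross-check of the two scales quoted above, the sketch below evaluates the photo-meson threshold energy and the proton Larmor radius for the reference values ε_* = 10 eV and B = 1 G; the constants are standard, and the script is only meant to reproduce the orders of magnitude.

```python
# Numerical cross-check of the two scales quoted in the text (CGS-flavoured units).
M_P_EV = 938.272e6          # proton rest energy, eV
E_CHARGE = 4.803e-10        # elementary charge, esu
ERG_PER_EV = 1.602e-12

def photomeson_threshold_eV(eps_star_eV=10.0):
    """E_p above which p-gamma interactions on stellar photons are efficient."""
    return (200e6 / eps_star_eV) * M_P_EV        # about 2e16 eV for eps_* = 10 eV, as quoted

def larmor_radius_cm(E_p_eV=1e15, B_gauss=1.0):
    """Larmor radius R_L = E_p / (e B) for an ultra-relativistic proton."""
    # a few times 1e12 cm, consistent with the ~4e12 cm quoted in the text
    return E_p_eV * ERG_PER_EV / (E_CHARGE * B_gauss)

print(f"photo-meson threshold: {photomeson_threshold_eV():.1e} eV")
print(f"Larmor radius:         {larmor_radius_cm():.1e} cm")
```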
pp interactions result in the production of pions, which subsequently decay into γ-rays, neutrinos and electrons/positrons. If the Larmor radius of the highest energy protons is comparable to the size of the disk, most of the pions are produced by protons propagating toward the companion star. In this case the neutrino and γ-ray emission from the pion decays is expected to be anisotropic, with most of the neutrino flux emitted toward the massive star, as shown in Fig. 1. On the contrary, the synchrotron and IC emission from the e⁺e⁻ pairs is most probably isotropized at the energies at which the radiative cooling time of the electrons becomes longer than the period of gyration in the magnetic field. The difference in the anisotropy patterns of the neutrino emission and of the broad band emission from the secondary e⁺e⁻ pairs should, in principle, lead to a significant difference in the expected orbital lightcurves of the neutrino and electromagnetic emission from the source.
The flux of γ-rays from the pp interaction region is absorbed due to pair production on the ultraviolet photon background in the vicinity of the Be star. The maximal optical depth with respect to pair production is reached at energies E_γ ≃ 4m_e²/ε_* ≃ 0.2 [T_*/3 × 10^4 K]^-1 TeV, where the pair production cross section reaches its maximum, σ_γγ ≃ 1.5 × 10^-25 cm². The optical depth for γ-rays of this energy produced close to the massive star can be about τ_γγ ≥ 10. At higher γ-ray energies, E_γ T_* ≫ (m_e c²)², the pair production cross-section and, correspondingly, the optical depth decrease as E^-1 ln E. The attenuation of the γ-ray flux due to pair production becomes small only at energies above ∼10 TeV.
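A similarly simple check of the energy at which the γγ absorption peaks, using E_γ ≃ 4m_e²/ε_* with ε_* ≃ 3kT_*; the result agrees with the ≃0.2 TeV scale quoted above at the order-of-magnitude level.

```python
# Rough evaluation of the photon energy at which gamma-gamma absorption peaks,
# using E_gamma ~ 4 m_e^2 / eps_* with eps_* ~ 3 k T_* (natural units, c = 1).
K_BOLTZMANN_EV = 8.617e-5    # eV / K
M_E_EV = 0.511e6             # electron rest energy, eV

def absorption_peak_TeV(T_star_K=3e4):
    eps_star = 3 * K_BOLTZMANN_EV * T_star_K     # typical stellar photon energy, ~8 eV
    return 4 * M_E_EV**2 / eps_star / 1e12       # ~0.1-0.2 TeV, the scale quoted in the text

print(f"gamma-gamma absorption peaks near {absorption_peak_TeV():.2f} TeV")
```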
The power of the absorbed γ-rays is redistributed to the secondary e⁺e⁻ pairs, which subsequently lose their energy via synchrotron and inverse Compton emission. Depending on the magnetic field strength in the pair production region, the bulk of the electromagnetic emission from the secondary pairs with energies of 10 GeV-10 TeV can either be re-emitted back in the GeV-TeV energy band, if the inverse Compton loss dominates, or in the X-ray band, in the case of dominant synchrotron loss.
The results of numerical modeling of the spectra of (isotropic) emission from the secondary e⁺e⁻ pairs produced in pp and γγ interactions are shown in Fig. 2. Our numerical code follows the evolution of the spectra of the secondary particles produced in interactions of the high-energy protons with the stellar wind protons from the dense equatorial disk of the Be star. We assume that the high-energy protons are initially injected close to the massive star and then escape, together with the secondary particles produced in pp interactions, toward larger distances. The primary proton injection spectrum is assumed to be hard, with p ≃ 0, so that most of the protons have energies close to the cut-off energy, assumed to be E_cut = 10^15 eV. Such an almost monochromatic spectrum of protons injected into the stellar wind can be produced either if the high-energy protons originate from a "cold" relativistic wind with a bulk Lorentz factor ∼10^6, or because only the highest energy protons have large enough Larmor radii to penetrate deep into the stellar wind. The magnetic
Fig. 2. Broad band spectrum of emission from secondary e⁺e⁻ pairs produced in pp interactions close to the surface of the Be star, calculated assuming an almost monochromatic proton injection spectrum with p = 0, E_p,cut = 10^15 eV. The upper panel shows the injection spectrum of e⁺e⁻ pairs (dashed line) and the spectrum formed as a result of cooling via synchrotron, IC and bremsstrahlung emission as well as Coulomb energy loss. In the lower panel, thin red solid, dashed and dotted lines show, respectively, the synchrotron, IC and bremsstrahlung emission from the pairs. The black thick solid line shows the overall broadband model spectrum. (Instrument labels in the figure: XMM, INTEGRAL, BATSE, OSSE, COMPTEL, EGRET, MAGIC, VERITAS.)
field is supposed to be characterized by the radial profile B = B_0 (D/R_*)^-α_B with α_B = 1 and B_0 = 5 G.
Fig. 3 shows the modifications of the spectrum introduced by (a) the possible presence of an anisotropic γ-ray emission component produced by the π⁰ decays and (b) the strong anisotropic attenuation of the γ-ray flux by pair production. Both effects are expected to be strongly variable with orbital phase. It is not possible to estimate the real value of the average τ_γγ as long as the details of the three-dimensional geometry of the emission region and the mutual orientation of the extended emission region, the Be star and the observer are not known. Taking into account this uncertainty, we choose the minimal value of τ_γγ at which the absorbed spectrum does not violate the upper bound on the flux at energies above ∼(several) TeV found in the VERITAS observations of the source.
III. ESTIMATE OF THE NUMBER OF NEUTRINO EVENTS FOR ICECUBE
Within the hadronic models of activity, the MeV-GeV bump in the spectral energy distribution of GRLBs is produced by the emission from the secondary e⁺e⁻ pairs from the pp interactions. This fact enables an estimate of the neutrino luminosity of the source, based on its MeV-GeV band luminosity. The only uncertainty of such an estimate is that the synchrotron emission from the secondary e⁺e⁻ pairs in the MeV-GeV energy band is, most probably, isotropic, while the neutrino and π⁰ decay γ-ray emissions are not.
Fig. 3. Broad band spectrum of emission from pp interactions, calculated assuming the same parameters as in Fig. 2, but considering the possibility of strong neutrino emission from the source. Notations for the γ-ray spectrum are the same as in the lower panel of Fig. 2. The green thick solid line shows the spectrum of neutrinos. The black thick solid and dashed lines show the overall broadband model spectra: the solid line shows the case of almost radial escape of the γ-rays, while the dashed line corresponds to the attenuation of the γ-rays emitted normally to the direction from the massive star. The dotted thick black line shows the modification of the broad band spectrum if the emission from the tertiary e⁺e⁻ pairs produced via the γγ → e⁺e⁻ process is taken into account. (Instrument labels in the figure: XMM, INTEGRAL, BATSE, OSSE, COMPTEL, EGRET, MAGIC, VERITAS.)
Although the total power of the neutrino emission can be estimated from the MeV-GeV luminosity of the γ-ray-loud binary, modelling of the broad band emission spectrum of the source gives only mild constraints on the neutrino emission spectrum: acceptable models of the broad band spectra can be found assuming initial proton injection spectra ranging from an E^-2 powerlaw to almost monochromatic injection spectra (see [8] for details). The properties of the neutrino signal can be used to distinguish between different primary proton spectra, if one is able to derive information on the energies of the detected neutrinos from the observational data. The extraction of information about the properties of the source neutrino spectrum from the data is complicated by the fact that (a) the neutrino telescopes measure only the energies of the detected muons, rather than those of the primary neutrinos, and (b) the flux of the primary neutrinos and the energies of the secondary muons are attenuated by the effect of propagation through the Earth. The differential induced muon spectrum at the detector, which strongly depends on the source declination δ, can be obtained after propagation of the neutrinos up to the interaction point and further propagation of the muons to the detector. A semi-analytical method for the calculation of the muon spectra, based on the use of the muon effective area and the knowledge of the details of the detector, namely the angular and energy resolution, was developed in Ref. [8].
The results of such a semi-analytical calculation of the atmospheric-background-subtracted muon spectra, for the particular case of neutrinos from LS I +61°303, are presented in Fig. 4. The exposure time is taken to be 3 years of running of the full ICECUBE array. For comparison, we also show by the solid thick line the level of a signal which is 5σ above the atmospheric background. The dotted line shows the detected muon spectrum for the
Fig. 4. Background-subtracted cumulative muon spectra N(E_µ,thr), expected after the 3-year ICECUBE exposure for the three model neutrino spectra of LS I +61°303 discussed in the previous sections (error bars of the signal muon spectra are the sum in quadrature of the statistical errors of signal + atmospheric neutrino background). The dotted line shows the spectrum for the proton injection spectrum with spectral index p = 2. The thin solid line corresponds to the initial proton injection spectrum with p = 1, while the dashed line is for the proton injection spectrum with p = 0. The thick solid line shows the 5σ excess above the atmospheric neutrino background (the "discovery threshold"). The bin radius is set to ψ = 1.3°.
model with the proton injection spectrum with spectral index p = 2; the thin solid line corresponds to the proton injection spectrum with p = 1 and a cut-off at the same energy, while the dashed line corresponds to p = 0. In all three cases the cut-off energy is assumed to be E_cut = 1 PeV.
Inspecting the muon spectra shown in the upper panel of Fig. 4, one can see that a softer proton injection spectrum results in a slight excess of muon events at lower energies. If the overall normalization of the neutrino flux were known, a measurement of the spectrum of muon events would allow us to constrain the spectrum of the primary protons in the source. However, taking into account the uncertainty of the overall normalization of the neutrino flux introduced by the uncertainty of the anisotropy pattern of the neutrino emission, one finds that the statistics of the signal will not be sufficient for such a task. This is illustrated in the lower panel of Fig. 4, where a comparison of the shapes of the muon spectra is shown. If one assumes the same total number of muon events, the difference in the spectra for the three models is always within ∼1σ error bars over the entire energy range.
IV. CONCLUSIONS
We have estimated the neutrino flux from GRLBs expected within the hadronic model of activity of these sources. Within such a model, the measured spectral characteristics of the γ-ray emission from the source in the TeV energy band are not directly related to the spectral characteristics of the neutrino emission, because of the absorption of the TeV γ-rays on the thermal photon background produced by the massive star in the system. The uncertainty of the calculation of the attenuation of the TeV γ-ray flux introduces a large uncertainty into the estimate of the neutrino flux based on the measured TeV γ-ray flux.
Taking into account this uncertainty, we have adopted
a different approach for the estimate of the neutrino
flux from a GRLB. Namely, we have noted that the
energy output of proton-proton interactions, and hence
the neutrino flux, can instead be constrained by the
broad-band spectrum of the source.
Although the observed bolometric luminosity of the source (i.e. the height of the MeV-GeV bump of the SED) constrains the overall neutrino luminosity, the shape of the neutrino spectrum and the overall normalization of the neutrino flux are only mildly constrained by the multi-wavelength data. Taking into account these uncertainties of the neutrino emission spectrum, we have estimated the expected number of neutrinos which will be detected by ICECUBE, assuming that the neutrino flux saturates the upper bound imposed by the observed γ-ray flux in the MeV-GeV energy band. Considering the particular example of LS I +61°303, we have found that if the spectrum of high-energy protons in the source extends to PeV energies, the source could be readily detectable with 3 years of exposure with ICECUBE.
We have also explored the potential of the full ICECUBE detector for the measurement of the spectral characteristics of the neutrino signal from LS I +61°303. We find that in the case when the neutrino flux is at the level of the upper bound imposed by the observed MeV-GeV γ-ray flux, an exposure time much longer than 3 years is required to constrain the spectral index of the primary high-energy proton spectrum via observations of the neutrino signal in ICECUBE.
REFERENCES
[1] I.F. Mirabel, Science 312, 1759 (2006).
[2] M. Tavani, J. Arons, ApJ 477, 439 (1997).
[3] V. Bosch-Ramon et al., A&A 447, 263 (2006).
[4] L. Maraschi, A. Treves, MNRAS 194, 1 (1981).
[5] M. Chernyakova, A. Neronov, et al., MNRAS 367, 1201 (2006).
[6] D.F. Torres, F. Halzen, Astropart. Phys. 27, 500 (2007).
[7] http://icecube.wisc.edu/
[8] A. Neronov, M. Ribordy, Phys. Rev. D 79, 043013 (2009).
Acoustic sensor development for ultra high energy neutrino
detection
Matt Podgorski† and Mathieu Ribordy∗
∗ High Energy Physics Laboratory, EPFL, CH-1015, Switzerland
† RWTH Aachen, visiting EPFL
Abstract. The GZK neutrino flux characterization would give insights into cosmological source evolution, source spectra and composition at injection from the partial recovery of the degraded information carried by the ultra high energy cosmic rays. The flux is expected to be at levels necessitating a much larger instrumented volume (>100 km³) than those currently operating. First suggested by Askaryan, both radio and acoustic detection techniques could render this quest possible thanks to longer wave attenuation lengths (predicted to exceed a kilometer), allowing for a much sparser instrumentation compared to the optical detection technique.
We present the current acoustic R&D activities at our lab developing adapted devices, report on the obtained sensitivities and triangulation capabilities, and define some of the requirements for the construction of a full scale detector.
Keywords: Ultra high energy neutrinos. Acoustic detection techniques. Acoustic sensor studies.
I. INTRODUCTION
The IceCube detector [1] may well soon identify the first ultra high energy neutrino of cosmogenic origin, following interactions of ultra high energy cosmic rays with the cosmic microwave background [2]. Predictions for the cosmogenic neutrino flux, i.e. neutrinos from photo-disintegration, are at levels of the order of E dN/dE ∼ 10^-17 cm^-2 s^-1 sr^-1 at E = 10^18 eV, resulting in 0.01-1 event / year / km³ in ice [3]. These predictions strongly depend on the primary cosmic ray composition [4]. Currently, the situation is uncertain: while the observed correlation of UHE CR sources with the AGN distribution by AUGER [5] hints toward a light composition (and in this case we lie close to the upper flux predictions), dedicated AUGER composition studies favor a composition turning heavier at UHE [6]. GZK neutrinos are astronomical messengers keeping track of the original CR direction, since GZK interactions mostly occur close to the source. In case of the existence of a few UHE cosmic accelerators located close-by (Gpc), the detection of a substantial flux of GZK neutrinos from these directions using a multi-messenger approach would allow the possibility of pinpointing the nature of these CR accelerators.
The characterization of the GZK neutrino flux spectrum, and thus the recovery of the degraded information carried by UHE CRs, would allow the delineation of cosmological source evolution scenarios from source spectrum characteristics. To fulfill this goal, the event detection rate should be vastly increased. Therefore a much larger volume should be instrumented with an adequate technology for the detection of ultra high energy neutrino interactions. Two novel detection methods have been proposed, following signatures first discussed by Askaryan [7], [8]. An interacting neutrino emits a coherent Cherenkov pulse in the range of 0.1-1 GHz [9] close to the shower axis and a thin thermoacoustic pancake normal to the shower axis. Both detection techniques are currently exploited by several detectors. In ice, both radio and acoustic emissions have rather large theoretical attenuation lengths [10]. While this has been convincingly demonstrated for the radio emission, it is still a work in progress for the acoustic emission and is one of the main goals of the South Pole Acoustic Test Setup (SPATS) [11]. With the data collected by the SPATS array, the sound speed as a function of depth has been determined with great accuracy, meeting theoretical expectations [12], and S-waves have been found as well. However, the exact nature of the local source of noise and the exact value of the attenuation length remain unknown. The newest experimental results hint toward a reduced pressure wave attenuation length on one hand and demonstrate a favorable noise level below 10 mPa on the other hand [13].
Contrary to salt and water, ice is unique: it allows the detection of three distinct signatures accompanying a neutrino event, namely optical Cherenkov light, coherent radio Cherenkov and thermoacoustic emissions, thus firmly establishing the event origin by a strong background reduction. A possible layout for the hybrid instrumentation of a large volume of order 100 km³ at the South Pole would consist of strings deployed one kilometer apart down to a depth of 2 km (radio and acoustic attenuation lengths strongly vary with temperature and decrease with depth). Given the topologies of the radio and acoustic emissions, a string should be densely equipped with radio and acoustic devices, with an option of supplementing it with PMT devices for optical detection. 10-10³ interacting GZK neutrinos in a 100 km³ instrumented volume can be expected after 10 years. The cosmogenic spectrum could be characterized (and consequently insights into the underlying physics gained), provided a high detection efficiency, a deep knowledge of the local source of noise and a good energy resolution.
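The event count quoted above follows from a simple scaling of the per-volume interaction rate; a trivial sketch of that scaling, using the 0.01-1 event/year/km³ range from the introduction and the proposed 100 km³, 10-year exposure:

```python
def expected_gzk_events(rate_per_km3_yr, volume_km3=100.0, years=10.0):
    """Scale the per-volume GZK interaction rate to a detector volume and livetime."""
    return rate_per_km3_yr * volume_km3 * years

# 0.01 - 1 event / year / km^3 (flux- and composition-dependent, see text)
low, high = expected_gzk_events(0.01), expected_gzk_events(1.0)
print(f"expected interacting GZK neutrinos after 10 years: {low:.0f} - {high:.0f}")
```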
Thermoacoustic models and Monte Carlo simulations
predict that a signal from a neutrino with an energy E = 10^18 eV will typically have an amplitude of 10 mPa at a distance of one kilometer [14] (to be rescaled for a finite attenuation length). To keep a good S/N ratio, the sensitivity of the devices should be at the sub-mPa level. Also, a good pointing resolution may serve the purpose of a UHE point source search. Given the giant array layout introduced above, an acoustic signal will be recorded by a small number of acoustic devices. It is therefore desirable to design acoustic sensor devices with pointing capabilities of their own.
In the next section, we present the R&D activities which are taking place at our lab with regard to sensor design and construction, and discuss their performance.
II. R&D ACTIVITIES
The design and construction of a multi-channel sensor, which uses piezo transducers (PZT) as sensitive elements, was conducted at our lab. A noise level below 5 mPa (with S/N ≡ S_RMS/N_RMS) and a good angular resolution were demonstrated, suggesting the possibility of excellent vertex localization by combining the responses from all sensor hits. The design, which must still be improved to meet our design goals, could eventually allow for diffuse acoustic noise reduction through spectral shape analysis and an accurate energy estimate of physical events.
The setup for conducting the R&D activities consists of a bath, topped with a support structure for one absolutely calibrated hydrophone (Sensortech SQ03), one homemade sensor and one transmitter. A data-taking LabView program interfaced to a National Instruments board is used for analog response digitization (12 bits × 12 channels, 1.25 MHz in total), transmitter pulse generation and 4π relative orientation between the transmitter and the sensors for automatized sensor profiling.
The experimental setup shown in Fig. 1 consists of the homemade hydrophone, a transmitter and the reference hydrophone. In the following, two different electric signal shapes have been considered: a damped sine pulse and a gaussian pulse, resulting in a tripolar pressure pulse (the neutrino-induced thermoacoustic pulse is bipolar).
Fig. 1. Experimental setup: SQ03 reference hydrophone, homemade sensor and transmitter in a large water tank.
A. Acoustic sensor design
The sensor consists of an aluminium pressure vessel
housing 4 channels to provide triangulation capabilities.
The noise level at the amplifier input amounts to 130 nV, reached
Fig. 2. Pressure sensitivity as a function of frequency for damped sine transmitted pulses, normalized to an S/N = 1 ratio (shown for piezos 1-4; RMS piezo noise: 3.2 mV; SQ03 sensitivity: 7.94 µV/mPa).
at the expense of some bandwidth reduction, peaking at 22 kHz with ∼90 dB amplification and sharply decreasing below 10 kHz and above 40 kHz. Whether that is optimal has to be studied further. The sensor is manufacturable at relatively low cost. While aluminium is an adequate medium for use in a liquid water bath, it will be replaced by steel for application in ice (more adequate given both the impedance and the resistance to pressure).
B. Sensitivity calibration
Signals with peak frequencies in the range 10-90 kHz were recorded with a sampling rate of 330 kHz. A strong frequency correlation between the transmitted pulse and the sensor response was observed. Due to the finite size of the bath tub and given the sound speed in water, only the first 150 µs following the pulse arrival time were analysed in what follows, to avoid reflection artefacts.
With the data collected from the 4-channel sensor and from the commercial hydrophone, the absolute pressure sensitivity was calculated in the time domain using RMS values for signal and noise. Fig. 2 shows the absolute pressure sensitivity (defined as the pressure at S/N = 1) as a function of the dominant frequency of the sent signals.
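A minimal sketch of this time-domain calibration, assuming the recorded traces are available as arrays; the SQ03 sensitivity is the value quoted in Fig. 2, while the function names and the implicit linearity assumption are illustrative.

```python
import numpy as np

SQ03_V_PER_MPA = 7.94e-6   # calibrated SQ03 sensitivity, V/mPa (from Fig. 2)

def rms(x):
    return np.sqrt(np.mean(np.asarray(x, dtype=float) ** 2))

def pressure_at_snr1(sensor_signal, sensor_noise, sq03_signal):
    """Absolute pressure (mPa) at which one homemade channel reaches S/N = 1.

    The SQ03 trace fixes the absolute pressure of the transmitted pulse;
    dividing it by the channel's S/N gives the pressure for S/N = 1
    (assuming a linear response)."""
    pulse_pressure_mpa = rms(sq03_signal) / SQ03_V_PER_MPA
    snr = rms(sensor_signal) / rms(sensor_noise)
    return pulse_pressure_mpa / snr
```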
The measurements demonstrate the importance of the state of the surface coupling the PZT to the housing: the polished surfaces for piezos 1 and 3 show a response ∼2 times stronger than that of piezos 2 and 4, which are coupled to the housing through porous surfaces.
C. Triangulation
Time resolution is essential for triangulation, and therefore a digitization frequency of 100/200 kHz is required in order to reach cm resolution in aluminium/steel. This suggests that a sensor design should include digital electronics with at least a 200 kHz sampling rate per channel¹, in order to reach a 0.5-5 µs
¹ 100 kHz (and therefore a 200 kHz sampling rate) is by coincidence the value above which the ice attenuation length drops quickly, and roughly the extension of the neutrino-induced thermoacoustic pulse spectrum.
pulse start time resolution (depending on amplitude), which roughly corresponds to a 2°-20° (4π/10²-4π/10⁴) angular resolution with the current multi-channel sensor.
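To illustrate how a single four-channel module can point back to a source, the sketch below fits a plane-wave direction to the arrival-time differences between channels; the piezo coordinates, the sound speed and the example time differences are illustrative assumptions, not the actual sensor geometry.

```python
import numpy as np
from scipy.optimize import minimize

C_AL = 6320.0  # assumed longitudinal sound speed in aluminium, m/s

# Hypothetical positions of the four piezos inside the housing (m)
piezos = np.array([[0.03, 0.0, 0.0], [-0.03, 0.0, 0.0],
                   [0.0, 0.03, 0.0], [0.0, -0.03, 0.0]])

def residual(angles, dt_meas):
    """Sum of squared differences between measured and predicted arrival-time
    differences (relative to channel 0) for a plane wave from direction (theta, phi)."""
    theta, phi = angles
    n = np.array([np.sin(theta) * np.cos(phi), np.sin(theta) * np.sin(phi), np.cos(theta)])
    t = -piezos @ n / C_AL            # plane-wave arrival times at each piezo
    dt_pred = t[1:] - t[0]
    return np.sum((dt_meas - dt_pred) ** 2)

# dt_meas would hold the measured first-maximum time differences (illustrative values, s)
dt_meas = np.array([9.5e-6, 4.7e-6, -4.7e-6])
fit = minimize(residual, x0=[1.0, 0.5], args=(dt_meas,), method="Nelder-Mead")
print("reconstructed direction (theta, phi) [rad]:", fit.x)
```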
The design of a new digital (0.2 MHz/channel) 4-channel amplifier board, with long range communication protocols, has been started; it does not yet include trigger logic. Digitization is necessary for a viable acoustic detector design in order to avoid losses in km-long cables (of the order of 3 dB/100 m in high quality cables) and thus to keep both good sensitivity and good time resolution. Once in operation, this will allow us to define the requirements for a future efficient trigger concept at the sensor level.
Fig. 3. Time of the first signal maximum as a function of the polar orientation of the sensor, shown for piezos 1-4.
Figure 3 demonstrates the triangulation capabilities. The coupling between channels was found to provide, for any sampled sensor position, two channels with a signal within a factor of 2 of the channel with the highest response. The resolution will nevertheless depend on the individual signal-to-noise ratios, but this shows the potential for vertex reconstruction with a single sensor module.
D. Outlook
First positive results in the time domain were obtained. Absolute sensitivities are currently being analyzed in the frequency domain. A second 4-channel sensor of similar design in a steel housing will soon be equipped with digital electronics. Further sensor tests are foreseen next: low temperature behavior tests will be conducted in the laboratory, and tests at large distances and depths in Lake Geneva (∼400 m depth) will avoid reflections, assess the acoustic noise characteristics and probe the sensor design.
III. CONCLUSIONS
Acoustic neutrino detection techniques should be further developed, pushing the sensitivity to sub-mPa levels, together with the characterization of noise sources which may impede the applicability of the technique. Noise measurements in an open medium (water / ice) are required to characterize the noise rate and its spectral shape, in order to investigate improved trigger schemes relying on signal processing within the sensors. These developments have been started with a digitization board in the sensor, a necessary step for a viable acoustic array design, where signal attenuation along km-long transmission cables is excluded.
While it seems clear that sub-mPa sensors should be designed, the uncertain detection conditions at the South Pole make predictions concerning the detection efficiency difficult. The potential can be dangerously spoiled in case the local sources of acoustic noise mimic a neutrino event. Further dedicated studies are on-going to ensure that it will be possible to distinguish the event origin with high efficiency. The deployment of an acoustico-radio-optical hybrid detector would constitute a welcome option, allowing a further reduction of possible background noise. Also, complementing the hybrid radio-acoustic strings with a few optical DOMs such as those in IceCube would allow unambiguous tagging of neutrino events (at these energies, given a >100 m absorption length in the ice at these depths [15], photons may well accompany a radio-acoustic signal in the case of a neutrino event).
REFERENCES
[1] R. Abbasi et al., Nucl. Instr. Meth. A 601, 294-316 (2009).
[2] V. Berezinsky and G. Zatsepin, Phys. Lett. B 28, 423 (1969).
[3] R. Engel, D. Seckel and T. Stanev, Phys. Rev. D 64, 093010 (2001).
[4] L. A. Anchordoqui, H. Goldberg, D. Hooper, S. Sarkar and A. M. Taylor, Phys. Rev. D 76, 123008 (2007).
[5] J. Abraham et al. [Pierre Auger Collaboration], Astropart. Phys. 29, 188 (2008); Erratum-ibid. 30, 45 (2008).
[6] K. H. Kampert [Pierre Auger Collaboration], AIP Conf. Proc. 1085, 30 (2009).
[7] G. Askaryan, JETP 14, 441 (1962); G. Askaryan, JETP 21, 658 (1965).
[8] G. Askaryan, Sov. J. Atom. Energy 3, 921 (1957).
[9] E. Zas, F. Halzen and T. Stanev, Phys. Rev. D 45, 362 (1992).
[10] B. Price and L. Bergström, Appl. Opt. 36, 4181 (1997); L. Bergström, B. Price et al., Appl. Opt. 36, 4168 (1997); B. Price, J. Geophys. Res. 111, B02201 (2006).
[11] S. Boeser et al., arXiv:0807.4676 [astro-ph]; S. Boeser et al., Int. J. Mod. Phys. A 21S1, 107 (2006).
[12] J. Vandenbroucke et al., arXiv:0811.1087 [astro-ph].
[13] F. Descamps et al. [IceCube coll.], these proceedings.
[14] D. Besson et al., Int. J. Mod. Phys. A 21S1, 259 (2006).
[15] M. Ackermann et al., J. Geophys. Res. 111, D13203 (2006).