% daq.tex
The DAQ back-end software will merge data from the TPC,
photon detector and beam detector readouts to form events, using the
artDAQ data acquisition toolkit running on a farm of commercial
computers connected by an Ethernet switch. The artDAQ toolkit is
in use on several experiments at Fermilab; we are using it
on the 35~t prototype, so we will have considerable
experience with it by the time of the CERN test.

The data collection for the CERN test will operate in a mode
similar to that foreseen for the underground detectors. In order
to collect data from non-beam interactions, such as proton decay
candidates or atmospheric neutrinos, data will be continuously
read into the artDAQ data receiver nodes and processed through
the artDAQ system in quanta corresponding to fixed time intervals
measured from the beginning of the run. These quanta are then
transferred through the switch to a set of event building nodes
working in parallel, each node receiving all the data from all
the detectors for the time intervals it is responsible for processing.
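The mapping from time intervals to event-building nodes can be pictured as a simple round-robin assignment. The sketch below is illustrative only: the interval length, node count and function names are assumptions, not the actual artDAQ routing configuration.

```python
# Sketch: round-robin assignment of fixed time intervals to event-builder
# nodes. Interval length and node count are illustrative assumptions,
# not the real experiment parameters.
INTERVAL_MS = 10          # assumed length of one processing quantum
N_BUILDERS = 8            # assumed number of event-building nodes

def builder_for_timestamp(t_ms, run_start_ms=0):
    """Return the index of the event builder responsible for time t_ms."""
    interval_index = (t_ms - run_start_ms) // INTERVAL_MS
    return interval_index % N_BUILDERS

# All data within one interval, from every detector stream, is routed to
# the same builder, so that builder sees the complete detector for its slice.
```

Because the assignment depends only on the timestamp, every data receiver can route its fragments independently and still agree on the destination node.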
There will be 32 parallel incoming data streams from the TPCs
and 16 streams from the photon detectors. An additional stream
will come from the trigger board (similar to the board used by the
35~t detector), which will receive inputs from the spill
gate, the warning of extraction, and pattern-unit bits from the cosmic muon trigger counters
and other beamline instrumentation, described in Sections \ref{calibration} and
\ref{beaminstrument}, respectively.
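The trigger-board inputs can be thought of as bits packed into a trigger word in its data stream. The following sketch shows how such a word might be decoded; the bit positions and field widths are assumptions for illustration, not the real board's register layout.

```python
# Sketch: decoding a packed trigger word from the trigger-board stream.
# All bit assignments below are illustrative assumptions.
SPILL_GATE_BIT = 0        # assumed: beam spill gate open
EXTRACTION_WARN_BIT = 1   # assumed: warning of extraction
PATTERN_UNIT_SHIFT = 8    # assumed offset of the pattern-unit bits
PATTERN_UNIT_MASK = 0xFF  # assumed width of the pattern-unit field

def decode_trigger_word(word):
    """Unpack the trigger inputs carried in one trigger word."""
    return {
        "spill_gate": bool(word & (1 << SPILL_GATE_BIT)),
        "extraction_warning": bool(word & (1 << EXTRACTION_WARN_BIT)),
        "pattern_unit": (word >> PATTERN_UNIT_SHIFT) & PATTERN_UNIT_MASK,
    }
```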

Synchronization across all the input sources is essential so
that artDAQ can bring together the data from the input streams correctly for
processing by the event building nodes. The data receiver nodes will provide
overlap by repeating the data at the boundaries of the time intervals, so
that a particle whose data spans two time intervals can be collected in full.
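The boundary overlap can be pictured as each interval being read out with a small guard region on either side. In this sketch the interval and overlap lengths are assumptions; in practice they would be set by the maximum drift and readout time of a single particle.

```python
# Sketch: computing the padded readout window for one processing interval.
# INTERVAL_MS and OVERLAP_MS are illustrative assumptions.
INTERVAL_MS = 10   # assumed nominal interval length
OVERLAP_MS = 2     # assumed guard region repeated at each boundary

def readout_window(interval_index):
    """Return (start, end) in ms for this interval, padded with overlap."""
    start = interval_index * INTERVAL_MS - OVERLAP_MS
    end = (interval_index + 1) * INTERVAL_MS + OVERLAP_MS
    return max(start, 0), end

# Adjacent windows share the overlap region, so a particle crossing an
# interval boundary appears complete in at least one window.
```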
The time synchronization is provided to the RTM (rear transition module) on the LArTPC
readout crates, to the SSP photon detector readout, and to the trigger board by
a GPS-based time synchronization distribution system originally designed
for the NOvA experiment. This system includes functionality to calibrate and
correct for cable delays, and to send synchronization control signals to
the readout at predetermined times.
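Correcting for cable delays amounts to applying a per-endpoint calibration constant to the distributed time. This is a minimal sketch of the idea only; the endpoint names and delay values are assumptions, not the NOvA timing system's actual interface or calibration data.

```python
# Sketch: per-endpoint cable-delay correction for a distributed sync time.
# Endpoint names and delay constants (ns) are illustrative assumptions.
CABLE_DELAY_NS = {
    "rtm_crate_1": 310,    # assumed propagation delay to this endpoint
    "ssp_rack_1": 250,
    "trigger_board": 120,
}

def arrival_time_ns(send_time_ns, endpoint):
    """Time at which a sync edge sent at send_time_ns reaches the endpoint."""
    return send_time_ns + CABLE_DELAY_NS[endpoint]

def send_time_for_arrival_ns(target_time_ns, endpoint):
    """When to send so the edge arrives at the endpoint at target_time_ns."""
    return target_time_ns - CABLE_DELAY_NS[endpoint]

# To make a sync signal act at the same instant everywhere, the
# distribution system effectively sends it earlier on longer cables.
```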

The event building nodes will select time regions of interest within the time
intervals they are processing and form these into events to be written to
disk. The algorithms that select the events may be as simple as looking for
a trigger bit in the trigger board data stream, or may involve searching
for self-triggered events in the LArTPC data. An aggregation task, which
is part of artDAQ, will manage the parallel event building processes by
merging the output events into a single stream and writing them to disk.
To avoid oversized output data files, the aggregator will switch to writing
a new file when a predetermined file size is reached.
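The aggregator's size-based file rollover can be sketched as follows. The file-naming scheme and size threshold are assumptions for illustration, not artDAQ's actual configuration.

```python
# Sketch: size-based output file rotation, as performed by the aggregator.
# Threshold and naming scheme are illustrative assumptions.
import os

MAX_FILE_BYTES = 4 * 1024**3  # assumed rollover threshold (4 GB)

class RotatingWriter:
    """Write events sequentially, starting a new file at the size limit."""

    def __init__(self, prefix, max_bytes=MAX_FILE_BYTES):
        self.prefix, self.max_bytes = prefix, max_bytes
        self.index, self.written = 0, 0
        self.path = f"{prefix}_{self.index:04d}.dat"

    def write_event(self, payload: bytes):
        # If this event would push the current file past the threshold,
        # roll over to a freshly numbered file first.
        if self.written + len(payload) > self.max_bytes:
            self.index += 1
            self.written = 0
            self.path = f"{self.prefix}_{self.index:04d}.dat"
        with open(self.path, "ab") as f:
            f.write(payload)
        self.written += len(payload)
```

Rolling over between whole events, rather than mid-event, keeps every output file independently readable.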

Improved versions of the software systems being prototyped at the
35~t test will be available for the CERN test, including (a) run control, which
controls and monitors the DAQ processes and allows run starts and stops to
be performed by the operator; (b) online monitoring; and (c) slow control of
the voltages and temperatures used by the electronics. The trigger board also includes
facilities for generating calibration pulses and for identifying the event times of
the calibration events.