\subsection*{2.B Nontechnical statement}

``Particle Physics'', also called ``High Energy Physics'', is the branch of physics that seeks to elucidate the properties of the smallest constituents of matter and the forces between them (their ``interactions''). The last twenty years have seen a revolution in our understanding, with the emergence of a picture generally referred to as the ``Standard Model'' (SM) of particle physics. This theoretical description has been remarkably successful: even though many sophisticated, high-precision experiments have been performed to test it in the hope of finding deviations, it has withstood all attempts to invalidate it. On the other hand, because of theoretical shortcomings in the model, we know that the SM is not the complete picture. It can only be an (obviously very good) approximation of a more general theory, an ``extension'' of the Standard Model. This is why theorists in particle physics look for extensions of the SM that would unify all the forces in nature and cure the problems of the SM. The main experimental thrust in our field of research is to study particle collisions at the highest possible energies, in the hope of seeing deviations or new phenomena not predicted by the SM. At present (and for the next five years or so), the machine offering the highest energy is the ``Tevatron collider'' at FNAL (Fermi National Accelerator Laboratory near Chicago), which allows the study of proton-antiproton collisions at an energy of 1.8 TeV.
%i.e. nearly three times as high as the energy at the CERN collider
%where this kind of experiment was pioneered.
The FSU High Energy Physics group has been participating in one of the two large experiments at this collider, named ``\DZ'' after the area in the collider in which it is installed.
With the data taken from 1992 to 1995, this experiment has performed many tests of the SM, as well as searches for new phenomena ``beyond the standard model'' as predicted by the SM extensions mentioned above. So far, all observations are in excellent agreement with the standard model, including the long-awaited observation in 1995 of the sixth quark predicted by the standard model, the ``top'' quark [ref.5]. Since no deviations from the SM have been observed so far, there is a consensus that higher beam intensities are needed to allow the search for the possibly very rare new phenomena that would signal the new physics expected from the SM extensions. The Fermilab proton-antiproton collider will therefore be upgraded to provide higher beam intensities. This, in turn, makes it necessary to also improve the detectors [ref.6]. In particular, the ``tracking detectors'', which record the tracks of charged particles produced in the collisions, have to be replaced, since the old ones would not be able to cope with the higher intensities expected. One of these new detectors is a ``silicon microstrip detector'', which measures the position of charged particles with very high precision. Prototypes of these new detectors were constructed and tested in a testbeam at Fermilab, starting last summer and continuing through the fall of 1997. The FSU group has played a major role in this test effort, both in the design and installation of the test setup and by providing equipment and data acquisition software, as well as help with data taking and analysis. At the upgraded beam intensities, the rate of interactions between the colliding protons and antiprotons is very high; at the anticipated intensities, we expect about 10 million collisions per second. Most of these interactions are of no interest to the physics program of the experiment -- it is the rare events that are most likely to provide new and interesting information.
For example, only one in 10 billion interactions will produce a top quark [refs. 7,8]. Since the experiment can only record about 50 events per second, we would miss the interesting events if we did not have a way to decide very quickly which ones to record. This decision is made by a system of very fast electronics called the {\sl ``trigger''}.

\subsection*{2.C Context}

For more than a year I have been the leader of a working group studying the benefits and feasibility of using the new silicon microstrip detector in the trigger, i.e.\ in the decision, at data-taking time, of whether to record a particular event. Such a trigger could increase the sensitivity of the experiment for finding rare events. A recent review of this project concluded that the physics benefit of such a trigger would be great enough to justify its cost (about 1.5~M\$). The collaboration has therefore decided to go ahead, design and implement such a trigger, and find ways to fund it.

\subsection*{2.D Technical Statement}

Together with physicists from three other universities, I am presently writing a proposal to NSF to request funding for the development and construction of a new trigger processor for the \DZ experiment. The proposed trigger preprocessor will use the signals from the new Silicon Microstrip Tracker to improve the track momentum resolution of the trigger and to tag collisions in which long-lived $b$ quarks are produced. The study of events containing $b$ quarks can help address many fundamental questions in particle physics, e.g., CP violation, the top quark, and the search for the Higgs boson. The proposed instrument will add significantly to the physics capabilities of the \DZ detector in these areas. It has to process the input signals within 50 microseconds and thus requires the design of very fast digital electronics and the use of Field Programmable Gate Arrays (FPGAs) and Digital Signal Processors (DSPs).
The FSU contribution to this project covers the FPGAs in the hitfinder part of the trigger processor. This part receives the data from the Silicon Microstrip Tracker, finds clusters in them, and organizes the data for subsequent track fitting. This is a particularly challenging part, since the hitfinding has to be done in real time during the read-out of the data (the information from the 800\,000 silicon strips is read out in 7 microseconds). FPGAs are the only technology that allows such fast processing. FSU's task includes development and implementation studies of the algorithms, design of the circuit in the FPGAs, circuit simulations, and acquisition of the FPGAs. Beyond the design and implementation of the hardware, the FSU group's responsibility will also include leadership of the project and the development of trigger monitoring and simulation software.

\subsection*{2.E Progress}

The preparation of the \DZ experiment (planning, design, construction, testing, calibration, \ldots) has been going on since 1983. The FSU group joined this project in 1985 and has made substantial contributions to it. Ever since coming to FSU in the fall of 1985, I have spent my summers at national laboratories working on this experiment, each time together with two or more FSU students. During the summer of 1986, at Brookhaven National Laboratory (Long Island, NY), we participated in the construction and testing of the first two prototype modules for the calorimeter. The summers of 1987 through 1997 were spent at FNAL, where we worked on the preparation and installation of the calorimeter test setup, performing and analyzing calorimeter test and calibration measurements, installation and commissioning of the detector at the Tevatron collider, data taking at the collider, and data analysis, and (last summer) silicon detector tests and studies for the silicon tracker trigger.
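The cluster finding performed by the hitfinder can be illustrated with a short software model: group contiguous strips whose pulse height exceeds a threshold, and report each cluster's charge-weighted centroid for the track fit. This is only an illustrative sketch; the real implementation runs in FPGA logic during read-out, and the threshold value and data layout used here are assumptions, not the actual design.

```python
# Sketch of hitfinder-style cluster finding: contiguous strips above a
# pulse-height threshold form a cluster; each cluster is reported as its
# charge-weighted centroid plus total charge. Threshold is hypothetical.

THRESHOLD = 10  # ADC counts (assumed value for illustration)

def _centroid(run):
    """Charge-weighted centroid and total charge of one run of strips."""
    total = sum(q for _, q in run)
    centroid = sum(i * q for i, q in run) / total
    return centroid, total

def find_clusters(strip_charges):
    """Return (centroid, total_charge) for each contiguous run of strips
    above threshold; strip_charges[i] is the pulse height on strip i."""
    clusters = []
    run = []  # (strip_index, charge) pairs in the current run
    for i, q in enumerate(strip_charges):
        if q > THRESHOLD:
            run.append((i, q))
        elif run:
            clusters.append(_centroid(run))
            run = []
    if run:  # cluster ending at the last strip
        clusters.append(_centroid(run))
    return clusters

# Example: one cluster spanning strips 3-4, one on strip 8
print(find_clusters([0, 2, 0, 50, 30, 0, 0, 1, 40, 0]))
# → [(3.375, 80), (8.0, 40)]
```

In hardware this sequential scan is replaced by parallel comparators and adders, which is what makes the 7-microsecond read-out window feasible; the software version only shows the logic of the algorithm.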
This experiment has been extremely successful; it has provided a wealth of new data which have proven extremely useful for further tests of the Standard Model. Four FSU students have based their PhD dissertations on data taken in this experiment. Two more are currently working on their PhD theses, and two more have begun their research work in this project. After the upgrade program currently in progress, the capabilities of the \DZ detector will be greatly enhanced, making it a very powerful tool in our quest to understand the basic laws governing matter. My plan is to devote most of my research effort, and in particular all of my summers, to work on this trigger processor. Simon Foo, a faculty member in the FSU electrical engineering department, has agreed to join me in this project and to help us physicists with his electronics expertise. We also hope to recruit an engineering graduate student to work with us, and to make this design work part of his thesis. The total time to design, build, and implement this new trigger processor is estimated at about two years. Being able to work on it full-time during the summers will help substantially in driving the design work forward. The work on this trigger processor will certainly lead to a technical publication. Beyond that, the improved performance of the detector will increase its potential to discover new physics. We look forward to the first physics results with the upgraded detector in 2000.

\subsection*{2.F Plans for outside proposals}

Our research work on \DZ is funded by a grant from DOE (about 1~M\$ per year). Work on the new silicon tracker trigger processor was not included in our last proposal to DOE, and therefore no funding for the additional costs arising from this new work is available in our present contract.
As mentioned above, a consortium of four US universities (Boston University, Columbia University, SUNY at Stony Brook, and FSU) is preparing a proposal to NSF within the MRI (Major Research Instrumentation) program, requesting funding for the construction of this trigger processor. The MRI program does not provide funding for faculty salaries or travel. The FSU group is also planning to request supplemental funding from DOE to cover at least some of the additional costs arising from this new project.

\subsection*{2.G References}

% from pac2 doc:
Having a good trigger system is extremely important for a hadron collider experiment, because the event rate from collisions is many orders of magnitude higher than the rate at which events can be recorded. After the upgrade of the Fermilab Tevatron collider, the collision rate seen by the \DZ detector in Run II will be effectively equal to the beam crossing rate, i.e.\ several MHz. This is about a factor of $10^5$ higher than the tape-writing rate, which will be less than 50 Hz. The role of the trigger is to reject as much as possible of the overwhelming ``minimum bias'' background of ``typical'' events while maintaining high efficiency for the ``interesting'' events that one wants to record for off-line analysis. Even after the reduction of unwanted events by the trigger, the recorded event sample is still dominated by background. For example, the total cross section for \tt\ production is about 6 pb, to be compared with the total \pp\ interaction cross section of about 80 mb; so even after a reduction of the background by a factor of $10^5$, the fraction of \tt\ events in the recorded event sample is at best 1 in $10^5$. In the upgraded \DZ detector, the rejection of background events is done by a three-stage trigger system, denoted level 1 (L1), level 2 (L2), and level 3 (L3).
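For concreteness, the factors quoted above can be verified with a short calculation (taking 5 MHz as a representative value for ``several MHz''). The required overall rejection is
\begin{displaymath}
\frac{5\times 10^{6}~\mathrm{Hz}}{50~\mathrm{Hz}} = 10^{5},
\end{displaymath}
and the fraction of top events follows from the cross sections:
\begin{displaymath}
\frac{\sigma_{t\bar t}}{\sigma_{p\bar p}}
\approx \frac{6~\mathrm{pb}}{80~\mathrm{mb}}
= \frac{6\times 10^{-12}~\mathrm{b}}{8\times 10^{-2}~\mathrm{b}}
= 7.5\times 10^{-11},
\end{displaymath}
so even after a background rejection of $10^{5}$ the recorded fraction is only about $7.5\times 10^{-6}$, i.e.\ roughly 1 in $10^{5}$, as stated.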
The L1 trigger has the task of reducing the event rate to 10 kHz, which is the maximum rate at which the information from the SMT (the silicon microstrip detector) can be digitized. The subsequent trigger levels then need to reduce the rate so that it fits into the bandwidth of the next stage: L2 is designed to receive 10 kHz and has to reduce this input rate to an output (accept) rate of at most 1 kHz, which is the maximum rate that L3 can handle. The final triggering stage, L3, then has to match its output (accept) rate to the rate at which events can be recorded (20 Hz). If the input rate offered to one of the triggering stages is close to the maximum admissible one, deadtime losses become important. Such deadtime losses reduce the data-taking efficiency indiscriminately for background and signal events. In practice we try to avoid running under such conditions, because the deadtime losses can depend on luminosity and on beam conditions (the ``cleanliness'' of the beam), and are difficult to measure and model. If the trigger rates become too high, one chooses instead to prescale some of the triggers, which has the same effect on the efficiency (i.e.\ loss of both background and signal events) but is better under control. For signals whose cross section is very small, it is clear that prescaling should be avoided at all costs. This is only possible if the trigger system has access to sufficiently powerful tools that allow selective rejection of the unwanted background. In our proposal to the PAC \cite{pac1} we showed that the Silicon Track Trigger (STT) provides powerful tools for rate control by background rejection: owing to its ability to perform $b$-tagging at L2, it allows trigger rate reductions by substantial factors (two and higher) with negligible or very small loss in efficiency for a wide range of channels with $b$ quarks in the final state, e.g.
top production (both \tt\ and single top), Higgs production with decay into $b \bar b$, and $b$-physics studies. Furthermore, we argued that owing to the better momentum resolution, trigger thresholds in $p_t$ would be sharpened, leading to a further reduction in trigger rate. In the present document we elaborate on some of these issues and respond to the request of the PAC for a more quantitative assessment of the ultimate physics benefit of the STT in terms of measurements and discoveries.

\section{Search for the Higgs Boson}
\subsection{Higgs discovery prospects at the Tevatron}

The most promising process in which to observe Higgs boson production at the Tevatron is associated production with vector bosons. In the recently held Higgs--SUSY workshop at Fermilab \cite{hswkshop}, the following conclusions were reached about the prospects of finding the SM Higgs at the upgraded Tevatron \cite{conway}:
\begin{itemize}
\item there is no single golden discovery channel; combining all channels, and both experiments (CDF and \DZ), is crucial;
\item both experiments need to {\bf optimize trigger efficiency, $m_{b \bar b}$ resolution, and $b$-tagging efficiency};
\item implicitly, all studies assume that both experiments have $b$-tagging at the trigger level.
\end{itemize}
Under these conditions:
\begin{itemize}
\item if there is no SM Higgs boson, CDF and \DZ can exclude it at 95\% CL up to a mass of 120 GeV in Run II, and with 10 fb$^{-1}$ can extend the exclusion up to 190 GeV;
\item if there is a SM Higgs boson, with 30 fb$^{-1}$ it can be discovered at the 3 to 5 $\sigma$ level, up to a mass of 190 GeV.
\end{itemize}
In this document we have provided further arguments to strengthen the physics case for the implementation of the Silicon Track Trigger, a trigger processor for the \DZ silicon detector. We have shown that the proposed STT will be an indispensable asset in the search for the Higgs boson and technicolor in Run II.
Furthermore, we have demonstrated that it will improve the precision of the top mass measurement, as well as the precision of the measurement of the CP violation parameter $\sin 2\beta$. We have also shown that it is a powerful tool for controlling trigger rates at L2. These rate reductions at L2 are important in maintaining high efficiency for many interesting physics channels with $b$ quarks in the final state, but they can also indirectly benefit other physics signals (without $b$ quarks) by freeing up bandwidth at L2. The STT is essential for the full exploitation of the \DZ detector's physics potential in Run II.