A Silicon Track Trigger for the \DZ Experiment in Run~II -- \\ Further Physics Benefit Studies\\ \vspace{3cm} \Large Addendum to the proposal (P908)\\ submitted to the PAC in September 1998\\ \vspace{5cm} The \DZ Collaboration\\ 15 January 1999\\ \end{center} \vspace{0.15cm} \clearpage \normalsize \tableofcontents \clearpage

\section{Introduction}
A good trigger system is extremely important for a hadron collider experiment because the event rate from collisions is many orders of magnitude higher than the rate at which events can be recorded. After the upgrade of the Fermilab Tevatron collider, the collision rate seen by the \DZ detector in Run II will be effectively equal to the beam crossing rate, i.e. several MHz. This is about a factor of $10^5$ higher than the tape writing speed, which will be less than 50 Hz. The role of the trigger is to reject as much as possible of the overwhelming background of ``minimum bias'' and other ``typical'' events while maintaining high efficiency for the ``interesting'' events that one wants to record for off-line analysis. Even after the reduction of unwanted events by the trigger, the recorded event sample is still dominated by background. For example, the total cross section for \tt\ production is about 6 pb, compared to the total \pp\ interaction cross section of about 80 mb; even after a reduction of the background by a factor of $10^5$, the fraction of \tt\ events in the recorded sample is at best 1 in $10^5$. In the upgraded \DZ detector, the rejection of background events is performed by a three-stage trigger system, denoted level 1 (L1), level 2 (L2) and level 3 (L3). The L1 trigger has the task of reducing the event rate to 10 kHz, the maximum rate at which the information from the SMT (the silicon microstrip detector) can be digitized.
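The arithmetic above can be sketched numerically. This is a purely illustrative back-of-the-envelope check, using only the cross sections and rejection factor quoted in the text:

```python
# Illustrative check of the trigger problem, using the numbers quoted above:
# a 6 pb ttbar cross section, an 80 mb total p-pbar cross section, and a
# background rejection factor of 10^5.
sigma_ttbar = 6e-12   # ttbar production cross section in barns (6 pb)
sigma_total = 80e-3   # total p-pbar interaction cross section in barns (80 mb)

raw_fraction = sigma_ttbar / sigma_total       # fraction before any trigger
recorded_fraction = raw_fraction * 1e5         # after rejecting background by 10^5

print(f"ttbar fraction in collisions:     {raw_fraction:.1e}")
print(f"ttbar fraction on tape (at best): {recorded_fraction:.1e}")
```

The recorded fraction comes out at about $10^{-5}$, as stated in the text.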
The subsequent trigger levels must then reduce the rate to fit the bandwidth of the next stage: L2 is designed to receive up to 10 kHz and has to reduce this input rate to an output (accept) rate of at most 1 kHz, the maximum rate that L3 can handle. The final triggering stage, L3, then has to match its output (accept) rate to the rate at which events can be recorded (20 Hz). If the input rate offered to one of the triggering stages approaches the maximum admissible one, deadtime losses become important. Such deadtime losses reduce the data-taking efficiency indiscriminately for background and signal events. In practice we try to avoid running under such conditions because the deadtime losses depend on luminosity and beam conditions (the ``cleanliness'' of the beam) and are difficult to measure and model. If the trigger rates become too high, one chooses instead to prescale some of the triggers, which has the same effect on the efficiency (i.e. loss of both background and signal events) but is better under control. For signals whose cross section is very small, prescaling must clearly be avoided at all costs. This is only possible if the trigger system has access to sufficiently powerful tools that allow selective rejection of the unwanted background. In our proposal to the PAC \cite{pac1} we showed that the Silicon Track Trigger (STT) provides such tools for rate control by background rejection: thanks to its ability to perform $b$-tagging at L2, it allows trigger rate reductions by substantial factors (two and higher) with negligible or very small loss in efficiency for a wide range of channels with $b$ quarks in the final state, e.g. top production (both \tt\ and single top), Higgs production with decay into $b \bar b$, as well as $b$-physics studies. Furthermore, we argued that the better momentum resolution would sharpen trigger thresholds in $p_t$, leading to a further reduction in trigger rate.
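The rejection factor each stage must provide follows directly from the quoted bandwidths. In the sketch below the crossing rate is taken as 7.5 MHz purely for illustration (the text says only ``several MHz''):

```python
# Per-stage rejection factors implied by the bandwidths quoted above.
# The 7.5 MHz crossing rate is an assumed value for illustration only.
rates_hz = [("crossings", 7.5e6), ("L1 accept", 1e4),
            ("L2 accept", 1e3), ("L3 accept", 20.0)]

for (_, r_in), (stage, r_out) in zip(rates_hz, rates_hz[1:]):
    print(f"{stage}: {r_in:.0f} Hz -> {r_out:.0f} Hz "
          f"(rejection ~{r_in / r_out:.0f}x)")
```

Each stage thus rejects between one and almost three orders of magnitude of events, which is why selective tools such as the STT matter at every level.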
In the present document we elaborate on some of these issues and respond to the request of the PAC for a more quantitative assessment of the ultimate physics benefit of the STT in terms of measurements and discoveries.

\section{Search for the Higgs Boson}
\subsection{Higgs discovery prospects at the Tevatron}
The most promising process in which to observe Higgs boson production at the Tevatron is associated production with vector bosons. At the recent Higgs--SUSY workshop at Fermilab \cite{hswkshop}, the following conclusions were reached about the prospects of finding the SM Higgs at the upgraded Tevatron \cite{conway}:
\begin{itemize}
\item there is no single golden discovery channel; combining all channels, and both experiments (CDF and \DZ), is crucial;
\item both experiments need to {\bf optimize trigger efficiency, $m_{b \bar b}$ resolution, and $b$-tagging efficiency};
\item implicitly, all studies assume that both experiments have $b$-tagging at the trigger level.
\end{itemize}
Under these conditions:
\begin{itemize}
\item if there is no SM Higgs boson, CDF and \DZ can exclude it at 95\% CL up to 120 GeV mass in Run II, and with 10 fb$^{-1}$ can extend the exclusion up to 190 GeV;
\item if there is a SM Higgs boson, with 30 fb$^{-1}$ it can be discovered at the 3 to 5 $\sigma$ level, up to 190 GeV mass.
\end{itemize}
Sensitivities for some of the final states are given in Table~\ref{tab:Hsensit} \cite{conway, jesik}.
\begin{table}[h] \begin{center}
\caption{Higgs boson production at the Tevatron: expected number of signal ($S$) and background ($B$) events and sensitivity for 1 fb$^{-1}$ of data; assumptions are: CDF and \DZ combined data (Gaussian approximation in combination), improved (by 30\%) $m_{\bbbar}$ resolution, and Run II acceptance.
Most of the numbers are from John Conway's summary talk \cite{conway}; the row labelled ``alternate'' is from a \DZ study \cite{jesik}.}
\label{tab:Hsensit} \vspace{2mm}
\begin{tabular}{|c|c||c|c|c|c|c|} \hline
 & & \multicolumn{5}{c|}{Higgs Mass (GeV)}\\
Channel & quantity & 90 & 100 & 110 & 120 & 130 \\ \hline
 & $S$ & 2.5 & 2.2 & 1.9 & 1.2 & 0.6 \\
$\nu \bar \nu$\bb & $B$ & 10 & 9.3 & 8.0 & 6.5 & 4.8 \\
 & \ssb & 0.8 & 0.7 & 0.7 & 0.5 & 0.3 \\ \hline
 & $S$ & 8.9 & 6.7 & 4.6 & 3.2 & 2.1 \\
$\nu \bar \nu$\bb & $B$ & 51 & 47 & 43 & 41 & 37 \\
(alternate) & \ssb & 1.2 & 1.0 & 0.7 & 0.5 & 0.3 \\ \hline
 & $S$ & 8.4 & 6.6 & 5.0 & 3.7 & 2.2 \\
$\ell \nu$\bb & $B$ & 48 & 52 & 48 & 49 & 42 \\
 & \ssb & 1.2 & 0.9 & 0.7 & 0.5 & 0.3 \\ \hline
 & $S$ & 1.0 & 0.9 & 0.8 & 0.5 & 0.3 \\
$\ell^+\ell^-$\bb & $B$ & 3.6 & 3.1 & 2.5 & 1.8 & 1.1 \\
 & \ssb & 0.5 & 0.5 & 0.5 & 0.4 & 0.3 \\ \hline
 & $S$ & 8.1 & 5.6 & 3.5 & 2.5 & 1.3 \\
$q \bar q$\bb & $B$ & 6800 & 3600 & 2800 & 2300 & 2000 \\
 & \ssb & 0.10 & 0.09 & 0.07 & 0.05 & 0.03 \\ \hline
\end{tabular}
\end{center} \end{table}

\subsection{Higgs search in the channel $Z H \to \nu \bar \nu b \bar b$}
This is a channel that we had not considered seriously in our previous studies.
From the studies done for the recent Higgs--SUSY workshop at Fermilab \cite{hswkshop} it became clear, however, that for standard-model Higgs masses below about 130 GeV this channel contributes substantially to the significance in Higgs searches \cite{conway, jesik}, {\bf provided one has an efficient trigger for it.} This is illustrated in Table~\ref{tab:Hsensit}, which shows two sets of numbers for $S$, $B$, and \ssb\ for this final state, obtained by two different analyses \cite{jesik, weiming} using different cuts for rejection of the top quark background. Since the last PAC meeting we have generated 2000 such events and processed them through the full \DZ upgrade detector simulation (GEANT). Applying our Run II trigger simulation to these events, we find that due to bandwidth constraints the best achievable trigger efficiency without the STT is 35\%; with the STT, the trigger efficiency is 80\%. The best standard trigger requires \Etmiss$>40$~GeV (L1 and L2) and at least one level two jet with $E_T>10$~GeV. The best trigger including STT information requires at level one two jets, one with $E_T>10$~GeV and another with $E_T>7$~GeV, and at level two at least two jets with $E_T>20$~GeV and at least two tracks with impact parameter significance $S_b>2$. Thanks to the improvement in trigger efficiency, the luminosity required to obtain a given significance for the final states with neutrinos is reduced to 0.68 of its value without the STT. This implies that in this final state a 110 GeV Higgs would give a 3$\sigma$ effect in 10 fb$^{-1}$ of data instead of 14 fb$^{-1}$ (for \DZ alone). In other words: the STT offers the only way to trigger on these events with good efficiency. Not having this trigger would seriously jeopardize our capability of observing the Higgs boson in one of the more promising final states.
In this context it should also be mentioned that in Run I, bandwidth limitations did not allow us to have an unprescaled dijet + missing $E_T$ trigger, which is the kind of trigger needed to collect events of this type. Extrapolating from this Run I experience, it can be surmised that this would {\it a fortiori} also be the case in Run II. Due to its superior \MET resolution, the \DZ detector is expected to be better suited than CDF to trigger on such events; not having the STT would mean giving up this advantage.

\subsection{Higgs and technicolor searches in $j j b \bar b$ and $b \bar b b \bar b$ final states}
These final states arise from $W H, Z H \to j j b \bar b$ \cite{anna, roco, valls}, $\rho_T \to W + \pi_T \to j j b \bar b$ \cite{technic}, and $h A \to b \bar b b \bar b$ \cite{carena, baertata}. The cross sections times branching ratios for these processes are given in Table~\ref{tab:sigmaH}.
\begin{table}[h]
\caption{Cross section times branching ratio and assumed masses for hadronic final states of SM Higgs, technicolor and MSSM Higgs production.}
\label{tab:sigmaH}
\begin{center}
\begin{tabular}{|c|r|c|}\hline
channel & $\sigma \times BR$ (fb) & masses (GeV)\\ \hline
$W H \to j j b \bar b$ & 320 & $M_H = 100$\\
$\rho_T \to j j b \bar b$ & 3400 & $M_{\rho_T} = 250,~ M_{\pi_T} = 110$\\
$h A \to b \bar b b \bar b$ & $10\cdot\tan^2\beta$ & $M_h = M_A = 100$\\ \hline
\end{tabular}
\end{center}
\end{table}
Although the QCD background is huge (typically $10^6$ fb), we will want to look for confirming signals in these channels if evidence for a Higgs signal has been observed in other channels. Also, for large enough $\tan\beta$, all of $h A$ production could lead to the final state with four $b$'s, so we {\sl must} record these events. The initial significance \ssb\ is discouraging for all of these channels.
It should be kept in mind, however, that having these data will allow detailed studies which may well lead to the development of new analysis techniques, or the improvement of old ones, that can raise the signal-to-background ratio. Our experience from the top analysis in Run I has shown that this is indeed possible: initially the $t \bar t \to \mbox{all jets}$ channel seemed hopeless, but we mustered enough ingenuity in the development of new techniques to obtain a $3\sigma$ effect in the end. At the recent Higgs--SUSY workshop it was shown \cite{pushpa} that for $W H \to \ell \nu b \bar b$ a multivariate analysis using neural networks gives a large increase in sensitivity (about a factor of 2 in significance). In the \DZ top analysis, the relative gain in sensitivity for the all-jets channel was nearly 4 times larger than in the lepton + jets channel. If we assume that the neural-network improvement ratio (all-jets over lepton + jets) is the same for $W H$ as for top, then, given the result shown at the Higgs--SUSY workshop, we could see an order of magnitude improvement in sensitivity over what we now have. It is clear from the above that we do not want to throw away these events, even if at present we do not yet have an exact understanding of how we will extract the signal from the overwhelming background. Having these hadronic final state events may be crucial in confirming a Higgs discovery and will be indispensable in finding technicolor. Use of the STT to trigger on these events is crucial in maximizing efficiency for these signals (see Table~\ref{tab:efficH}).
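As a rough cross-check, the event yields of Table~\ref{tab:efficH} can be compared with the cross sections of Table~\ref{tab:sigmaH} via yield per fb$^{-1}$ $= \sigma \times BR \times \varepsilon$. The sketch below back-solves the implied overall efficiencies; these are derived quantities for illustration, not independent inputs of the analysis:

```python
# Implied overall efficiency (trigger x selection) per channel, back-solved
# from the quoted yields per fb^-1 and the cross sections in fb.
channels = {
    "WH+ZH -> jjbb": (320.0, 6.0, 20.0),    # (sigma*BR [fb], yield w/o STT, with STT)
    "rho_T -> jjbb": (3400.0, 63.0, 200.0),
}
for name, (xsec, n_without, n_with) in channels.items():
    print(f"{name}: implied efficiency ~{100 * n_without / xsec:.1f}% without STT, "
          f"~{100 * n_with / xsec:.1f}% with STT")
```

In both channels the STT raises the implied overall efficiency by roughly a factor of three.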
\begin{table}[h] \begin{center}
\caption{Number of events per fb$^{-1}$ recorded for each of the three hadronic channels with and without the STT (based on the analysis of Ref.~\cite{anna}, assuming a factor 4 improvement from multivariate analysis with neural networks for a fixed background of 3800 events).}
\label{tab:efficH} \vspace{2mm}
\begin{tabular}{|c|c|c|}\hline
 & \multicolumn{2}{c|}{number of events per fb$^{-1}$} \\
channel & without STT & with STT \\ \hline
$W H + Z H$ & 6 & 20 \\
$\rho_T$ & 63 & 200 \\
$h A$ & $0.2\cdot \tan^2 \beta$ & $> 0.7 \cdot \tan^2\beta$ \\ \hline
\end{tabular}
\end{center} \end{table} \vspace{-5mm}

\section{$Z \to b \bar b$}
In our proposal P908 to Fermilab \cite{pac1} we showed that the STT would allow us to trigger on \Zbb\ events with a low trigger rate while maintaining an acceptable efficiency; without the STT, the efficiency would be reduced by substantial factors (the precise value depending on the bandwidth that can be allocated to such a trigger). Without the STT, the only ways to control the trigger rate would be to prescale (thus reducing efficiency) or to raise the trigger threshold, which would cut into the signal and in addition distort the dijet spectrum, making it difficult to extract the \Zbb\ signal above the \bb\ dijet background. The ability to accumulate a sample of \Zbb\ events is extremely important: \Zbb\ is {\sl the only state} with a known mass that is reconstructed from jets. Even though it applies directly only to $b$ jets, we will use it to put errors on our understanding of the jet energy scale for all jets (light quarks and gluons as well). If we want to do jet spectroscopy for top and/or Higgs, we {\sl need} to be able to see a \Zbb\ signal. Such a signal would have implications for both top-quark and Higgs-boson physics.
It would serve to calibrate the jet energy scale, which currently gives rise to one of the largest systematic uncertainties in the top mass measurement \cite{mtop}, and to measure the $b$-tagging efficiency. It would also provide proof that we can see a \bb\ resonance and allow us to measure the observed line shape. Furthermore, these events could be used as a control sample with which to develop new jet energy algorithms aimed at optimizing the energy resolution (by using both tracking and calorimeter information). This would be vital for the proposed searches for the decay $H\to\bbbar$ of the Higgs boson during Run II \cite{conway,tev2000}. It is encouraging that the CDF collaboration has recently observed a \Zbb\ signal in a data sample triggered on single muons from Run I \cite{CDF_Zbb}. In response to questions asked by the PAC, we have done further studies \cite{zbb2} to verify that a signal for \Zbb\ can be seen with the \DZ detector above the expected background from strong \bb\ production. We have used PYTHIA-generated \Zbb\ events and dijet background, smeared with the \DZ detector resolution, with the background normalized to the cross section measured during Run I ($\approx240$ nb) \cite{b_xsec}. We then apply a number of kinematic selection cuts whose purpose is to improve the signal-to-background ratio. These cuts are similar to the ones used by CDF in their recent observation of a \Zbb\ signal \cite{CDF_Zbb} and are described in \cite{zbb2}. The STT allows us to control the background trigger rate by tuning the stringency of the displaced vertex requirement (see \cite{pac1, zbb1}). If we want to limit the rate of the \Zbb\ trigger to 20 Hz, this corresponds to a signal trigger efficiency of 20\%.
With this trigger efficiency, and assuming 50\% off-line $b$-tagging efficiency, we expect to reconstruct about 43{,}000 events from \Zbb\ decays and 570{,}000 from strong \bb\ production. (Note that the tagging efficiency expected in Run II is actually higher than this, so this is a conservative estimate.) By contrast, using a muon-triggered sample as CDF did in Run I \cite{CDF_Zbb} would limit the number of reconstructed \Zbb\ events in Run II to about 900, over a background of about 10{,}000. The double-tagged and untagged spectra shown in Fig.~\ref{fig:RunII_exp} are populated with the number of events expected when using the STT. We find that the shape of the non-resonant \bb\ background is consistent with that of the (light quark, i.e. untagged) dijet background, so we can use the untagged dijet spectrum as representative of the background (including the non-resonant \bb\ contribution). Subtraction of the normalized untagged spectrum yields the \Zbb\ signal shown in Fig.~\ref{fig:RunII_Zbb}. Fitting a Gaussian curve to the background-subtracted spectrum, we find a width ($\sigma$) of about 11.5 GeV, consistent with the resolution used in the generation of the signal. The statistical uncertainty on the mean of the Gaussian is found to be 0.28\%. From this we conclude that we can expect to measure the \Zbb\ peak position to about ${1\over 3}$\% precision. This includes statistical uncertainties in the background subtraction and assumes that, in addition to a \Zbb\ signal trigger using the STT, we have a control trigger without the STT requirement, prescaled to about the same rate as the signal trigger.
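The subtraction procedure can be illustrated with a toy Monte Carlo. In the sketch below the peak position (91 GeV), width (11.5 GeV) and event counts are taken from the text, while the exponential background shape and the simple weighted-mean peak estimator are assumptions made purely for illustration; the real analysis fits a Gaussian to the subtracted spectrum:

```python
# Toy sketch of the background subtraction described above: a Gaussian
# "Z -> bb" peak on a smooth background, minus an independent background
# template, with the peak position recovered from the residual spectrum.
import numpy as np

rng = np.random.default_rng(1)
sig = rng.normal(91.0, 11.5, 43_000)            # ~43,000 Z->bb events
bkg = rng.exponential(60.0, 570_000) + 30.0     # ~570,000 dijet events (assumed shape)
tmpl = rng.exponential(60.0, 570_000) + 30.0    # independent "untagged" template

edges = np.linspace(30.0, 200.0, 86)            # 2 GeV bins
h_tag, _ = np.histogram(np.concatenate([sig, bkg]), bins=edges)
h_tmpl, _ = np.histogram(tmpl, bins=edges)
resid = (h_tag - h_tmpl).astype(float)          # background-subtracted spectrum

centers = 0.5 * (edges[:-1] + edges[1:])
window = (centers > 60.0) & (centers < 120.0)   # region around the Z peak
peak = np.average(centers[window], weights=np.clip(resid[window], 0.0, None))
print(f"recovered peak position ~ {peak:.1f} GeV")
```

The recovered peak lands near the generated 91 GeV despite the statistical fluctuations of the subtracted background, which is the essence of the argument above.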
\begin{figure}[p] \vspace{-1.5cm} \begin{center}
\mbox{\psfig{figure=zbb_fig06a.ps,height=3.5in}}
\caption{Invariant \bb\ mass spectra for background ($\bullet$) and signal + background ($\circ$) expected from Run II; the numbers of events correspond to the expected numbers of reconstructed events.}
\label{fig:RunII_exp}
\mbox{\psfig{figure=zbb_fig07a.ps,height=3.5in}}
\caption{Background-subtracted \bb\ invariant mass spectrum expected from Run II with superimposed fit.}
\label{fig:RunII_Zbb}
\end{center} \end{figure}

\section{Studies of the top quark}
In our proposal \cite{pac1} we showed that the proposed trigger processor can be very useful in controlling the (background dominated) trigger rates for both \tt\ and single top production. It is difficult to translate this directly into a quantitative measure of the ultimate physics benefit, e.g. in terms of improvements in measurement precision. The reason is that the final net gain due to the additional trigger processor will depend on other parameters, e.g. the instantaneous luminosity, the beam quality, and the final trigger menu adopted by the collaboration (i.e. the bandwidth allocated to the various triggers); the latter may depend on requests for bandwidth for new physics topics of which we are presently not aware. Here we address one issue which can be quantified now: the improvement in the top mass measurement due to the better jet energy scale calibration described in the previous section. We start from our Run I top mass measurement using \tt\ events with a lepton + jets final state \cite{mtop}. In that measurement, the sources of systematic uncertainty are the jet energy calibration, event generation (gluon radiation from initial and final state), detector simulation (detector noise, pileup, Monte Carlo sample sizes), and the fit procedure. The jet energy scale in Run I is known to $\pm$(2.5\% + 0.5~GeV).
The presently used method of $p_T$ balancing in dijet or $\gamma$+jet events is systematically limited at about the 1\% level; even pushing the present method as far as we can, we expect to achieve no better than a 1.5\% uncertainty. To extrapolate to Run II, we make the following assumptions:
\begin{itemize}
\item the integrated luminosity in Run II is 2 fb$^{-1}$;
\item the trigger, selection and reconstruction efficiency for top events in Run II is the same as in Run I;
\item all systematic errors except the jet energy calibration scale as $1/\sqrt{\int{\cal L}\mbox{d}t}$;
\item with the STT, the jet energy scale is determined from the \Zbb\ signal, as described in the previous section, i.e. to a precision of $\pm {1\over 3}$\%; without the STT, we use the best value we think we might be able to obtain with the present method, i.e. $\pm 1.5$\%;
\item the jet scale uncertainty contribution to the top mass measurement error is obtained by scaling from the corresponding Run I uncertainties.
\end{itemize}
Table~\ref{tab:mtopmass} summarizes the top mass measurement uncertainties for Run I, as well as the values expected for Run II. Without the use of the \Zbb\ signal, the contribution from the jet energy scale calibration would dominate the systematic uncertainty; the \Zbb\ signal allows us to reduce this uncertainty dramatically.
\begin{table}[h] \begin{center}
\caption{Uncertainties on the top mass measurement from the lepton + jets channel, as obtained in Run I, and as expected for Run II with and without the improved jet energy scale from the \Zbb\ signal.}
\label{tab:mtopmass} \vspace{1mm}
\begin{tabular}{|l|c|c|} \hline
\end{tabular}
\begin{tabular}{|l|c|c|c|} \hline
 & Run I & Run II & Run II \\
 & & w/o \Zbb\ & w/ \Zbb\ \\ \hline
integrated luminosity & 100 pb$^{-1}$ & 2 fb$^{-1}$ & 2 fb$^{-1}$\\ \hline
jet energy calibration & & & \\
uncertainty & 2.5\% + 0.5 GeV & 1.5\% & 0.3\% \\ \hline
systematic errors on top & & & \\
mass from & & & \\
\quad jet energy calibration & 4.0 GeV & 2.2 GeV & 0.5 GeV \\
\quad event generation & 3.1 GeV & 0.7 GeV & 0.7 GeV \\
\quad detector simulation & 1.6 GeV & 0.4 GeV & 0.4 GeV \\
\quad fit procedure & 1.3 GeV & 0.3 GeV & 0.3 GeV \\ \hline
total systematic & 5.5 GeV & 2.3 GeV & 1.0 GeV\\
statistical & 5.6 GeV & 1.3 GeV & 1.3 GeV\\ \hline
total & 7.8 GeV & 2.7 GeV & 1.6 GeV\\ \hline
\end{tabular}
\end{center} \end{table}
A preliminary study \cite{brian} of the top mass determination in the all-jets channel from Run I data finds that the statistical error is about 20 GeV, and the systematic uncertainty (dominated by the jet energy scale uncertainty) is about 6.5 GeV. For the extrapolation to Run II we use assumptions similar to those in the lepton + jets case, with the additional assumption that (thanks to the SMT) the $b$-tagging efficiency for jets from top decay will be three times higher in Run II than in Run I. This leads to an expected statistical error on the top mass from the all-jets channel of about 1.7 GeV, and a total systematic error of 2.6 GeV without the \Zbb\ signal and about 1 GeV with it. Here again, the use of the \Zbb\ signal has a dramatic impact on the precision of the mass measurement.
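As a consistency check (a sketch, not part of the analysis), the Run II columns of Table~\ref{tab:mtopmass} can be reproduced from the Run I values and the $1/\sqrt{\int{\cal L}\mbox{d}t}$ scaling rule listed above:

```python
import math

# Scaling of the Run I lepton + jets uncertainties (in GeV, from the table
# above) to Run II: statistical and non-jet-scale systematic errors scale
# as 1/sqrt(integrated luminosity); 100 pb^-1 -> 2 fb^-1.
scale = math.sqrt(0.1 / 2.0)

stat_run2 = 5.6 * scale     # statistical error
evtgen    = 3.1 * scale     # event generation
detsim    = 1.6 * scale     # detector simulation
fitproc   = 1.3 * scale     # fit procedure

# Total for the "with Zbb" column, using the quoted 0.5 GeV jet-scale term:
syst  = math.sqrt(0.5**2 + evtgen**2 + detsim**2 + fitproc**2)
total = math.sqrt(syst**2 + stat_run2**2)
print(f"stat {stat_run2:.1f} GeV, syst {syst:.1f} GeV, total {total:.1f} GeV")
```

The result (statistical 1.3 GeV, systematic 1.0 GeV, total 1.6 GeV) matches the ``Run II w/ \Zbb'' column of the table.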
\section{$B$ Physics}
In our original PAC proposal, as an example of a $B$-physics measurement, we showed that the STT in conjunction with single muon and dimuon triggers would allow us to achieve 32\% efficiency for \bkspsi, \psimumu\ events, while without the STT the efficiency is at most 24\%. We also showed that the STT can be used to control the L2 trigger rate while preserving good signal efficiency. In response to the request from the PAC, we provide here information on the precision with which we expect to measure $\sin 2\beta$ in Run II (see \cite{fs-bphysics, fs-bnsf} for details). Given the trigger efficiencies mentioned above, we estimate the number of fully reconstructed events in 2 fb$^{-1}$:\\
\begin{tabular}{lll}
\tab \bkspsi, \psimumu & with STT: & 8500 events\\
\tab \bkspsi, \psimumu & without STT: & 6400 events\\
\end{tabular}
\bigskip
Then, assuming a net flavor tagging efficiency of 5\% (including mis-tagging; see \cite{fs-bnsf}), we obtain the following uncertainties on $\sin 2\beta$:\\
\begin{tabular}{lll}
\tab \bkspsi, \psimumu & without STT: & error = 0.168\\
\tab \bkspsi, \psimumu & with STT: & error = 0.146\\
\end{tabular}
\bigskip
Given that $\sin 2\beta$ is constrained between 0.56 and 0.94 at 95\% C.L. by indirect measurements, the latter uncertainty would correspond to a direct measurement of CP violation with a significance of about 4$\sigma$. With the new preshower detectors installed on the inner face of the calorimeter, \DZ will also have the capability in Run II to identify and trigger on low $p_t$ electrons. Preliminary studies \cite{etrig} suggest that we could achieve a trigger efficiency for the \psiee\ decay mode similar to that for the muons. In that case, the $\sin 2\beta$ uncertainty would be further reduced to $\Delta(\sin 2\beta) \approx 0.1$.
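The two quoted uncertainties are consistent with simple $1/\sqrt{N}$ scaling of the statistical error with the size of the reconstructed sample. The sketch below illustrates this; the assumption of an equal-size $\psiee$ sample in the last step is ours, made for illustration:

```python
import math

# 1/sqrt(N) scaling of the sin(2beta) error with the reconstructed
# J/psi -> mu mu sample size, using the event counts quoted above.
n_with, n_without = 8500, 6400
err_with = 0.146

err_without = err_with * math.sqrt(n_with / n_without)
err_with_ee = err_with / math.sqrt(2)   # assumed equal-size J/psi -> ee sample
print(f"without STT: {err_without:.3f}; with STT plus e+e- mode: {err_with_ee:.2f}")
```

This reproduces the quoted 0.168 without the STT, and an uncertainty of about 0.10 once an electron sample of comparable size is added.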
These two determinations should be relatively uncorrelated and thus provide a strong test of CP violation.

\section{Trigger rates}
At the presentation to the PAC in October, we were asked to quantify the trigger rate reduction due to the better resolution of the SMT. As an example we show in Figs.~\ref{fig:elec} and \ref{fig:dielec} the trigger rates versus luminosity for central electron triggers, with and without the STT \cite{bornali}. Use of the STT reduces the trigger rate by about a factor of 2. Such a reduction gives us the possibility either to lower the $p_t$ threshold or to allow other triggers to be activated.
\begin{figure}[p] \vspace{-1.5cm} \begin{center}
\mbox{\epsfig{figure=el_c_3.eps,height=3.5in}}
\caption{Rate vs luminosity for the L2 central electron trigger, $p_t > 3$~GeV, with and without use of the STT.}
\label{fig:elec} \vspace{4mm}
\mbox{\epsfig{figure=diel_c_3.eps,height=3.5in}}
\caption{Rate vs luminosity for the L2 central dielectron trigger, $p_t > 3$~GeV, with and without use of the STT.}
\label{fig:dielec}
\end{center} \end{figure}

\section{Conclusion}
In this document we have provided further arguments to strengthen the physics case for the implementation of the Silicon Track Trigger, a trigger processor for the \DZ silicon detector. We have shown that the proposed STT will be an indispensable asset in the search for the Higgs boson and for technicolor in Run II. Furthermore, we have demonstrated that it will improve the precision of the top mass measurement, as well as that of the measurement of the CP violation parameter $\sin 2\beta$. We have also shown that it is a powerful tool to control trigger rates at L2. These rate reductions are important in maintaining high efficiency for many interesting physics channels with $b$ quarks in the final state, but can also indirectly benefit other physics signals (without $b$ quarks) by freeing up bandwidth at L2.
The STT is essential to allow the full exploitation of the \DZ detector's physics potential in Run II. %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%%%%%%% bibliography %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \newpage \input{biblio} \end{document}