@@ -14,7 +14,7 @@

\title{A high-throughput readout architecture based on PCI-Express Gen3 and DirectGMA technology}

-\author{M.~Vogelgesang$^a$, % HAHAHA ... let's wait for Michele to change that
+\author{M.~Vogelgesang$^a$,
L.~Rota$^a$,
N.~Zilio,
M.~Caselle$^a$,
@@ -218,8 +218,6 @@ develop custom applications written in C or high-level languages such as Python.

\section{Results}

-% LR: The first sentence is more than obvious, and "experiment", woah!
-% MV: You are an idiot. But sorry, I again stated the obvious.
We carried out performance measurements on a machine with an Intel Xeon E5-1630
at 3.7 GHz and an Intel C612 chipset, running openSUSE 13.1 with Linux 3.11.10.
The Xilinx VC709 evaluation board was plugged into one of the PCIe 3.0 x8 slots.
@@ -321,23 +319,6 @@ non-Gaussian distribution with two distinct peaks indicates a systemic influence
that we cannot control and is most likely caused by the non-deterministic
run-time behaviour of the operating system scheduler.

-%% Here: instead of this useless plot, we can plot the latency vs different data
-%% sizes transmitted (from FPGA). It should reach 50% less for large data
-%% transfers, even with our current limitation... Maybe we can also try on a normal
-%% desktop?
-
-
-% \begin{figure}
-% \centering
-% \includegraphics[width=0.6\textwidth]{figures/latency}
-% \caption{%
-% For data transfers larger than XX MB, latency is decreased by XXX percent with respect to the traditional approach (a) by using our implementation (b).
-% }
-% \label{fig:latency}
-% \end{figure}
-
-% In case everything is fine.
-
%% EMERGENCY TEXT if we don't manage to fix the latency problem
% The round-trip time of a memory read request issued from the CPU to the FPGA is
% less than 1 $\mu$s. Therefore, the current performance bottleneck lies in the