README.txt
Texts for the PDV web pages:

Questions:
- Research areas (In which areas, especially from a physics perspective, do we do research?)
Currently:
- DAQ systems, data management, and slow control for neutrino physics
- Scientific computing for high-speed tomography in photon science and materials science
- Technologies (Which technologies do we develop?)
See below.
- Projects (Which projects should be mentioned explicitly on the website?)
A list of current and completed projects:
https://ufo.kit.edu/dis/index.php/project/
- Technology-transfer projects (Same question as for projects)
None at the moment, and none foreseeable.
- Infrastructure (Which infrastructure do we operate that is worth presenting with a picture on the website?)
Parallel computing lab:
- GPU computing cluster
- Real-time storage systems
- High-speed imaging systems
DAQ + slow control lab:
- FPGA-based DAQ systems
- KATRIN test setup – a test system for the KATRIN detector data acquisition system
- NI-based slow control systems
Progress in detector technology in recent years has enabled greatly increased temporal and spatial resolution in scientific experiments. This progress has led to large amounts of data that must be transferred, processed, and analyzed. The PDV group aims to apply the latest technologies to meet the challenges of the data-intensive sciences.
Core Technologies:
- High-throughput interconnects linking detector systems, computing nodes, and archival storage
- Scientific GPU computing, hardware-aware programming, and optimization
- IT infrastructure for scientific experiments and cloud-based services
- Experiment control, data management, and automation
Results:

Reviewing GPU architectures to build efficient back projection for parallel geometries
https://link.springer.com/article/10.1007/s11554-019-00883-w
26.6.2019
A survey of parallel architectures presented during the past 10 years.
Similarities and differences between these architectures are analyzed, and we highlight how specific features can be used to enhance reconstruction performance. In particular, we build a performance model to find hardware hotspots and propose several optimizations to balance the load between the texture engine, the computational units, and the special function units, as well as between different types of memory, maximizing the utilization of all GPU subsystems in parallel. We further show that targeting architecture-specific features allows one to boost performance 2–7 times compared to the current state-of-the-art algorithms used in standard codes.
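The parallel-beam back projection that this paper accelerates can be illustrated with a minimal CPU sketch in NumPy. This is purely illustrative; the function name and the nearest-neighbour gather are my own simplifications, not code from the group's software:

```python
import numpy as np

def backproject(sinogram, angles, size):
    """Naive CPU back projection for parallel-beam geometry.

    sinogram: (n_angles, n_detectors) array of projections
    angles:   projection angles in radians
    size:     side length of the square output image
    """
    recon = np.zeros((size, size))
    center = (size - 1) / 2.0
    y, x = np.mgrid[0:size, 0:size]
    x = x - center
    y = y - center
    det_center = (sinogram.shape[1] - 1) / 2.0
    for proj, theta in zip(sinogram, angles):
        # Detector coordinate of every pixel for this projection angle
        t = x * np.cos(theta) + y * np.sin(theta) + det_center
        # Nearest-neighbour lookup; GPU codes instead use hardware-
        # interpolated texture fetches or explicit ALU interpolation
        idx = np.clip(np.round(t).astype(int), 0, sinogram.shape[1] - 1)
        recon += proj[idx]
    return recon / len(angles)
```

The inner loop body (a rotation plus an interpolated read per pixel and angle) is what the paper maps onto the different GPU subsystems.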
Balancing Load of GPU Subsystems to Accelerate Image Reconstruction in Parallel Beam Tomography
https://ieeexplore.ieee.org/document/8645862
21.2.2019
How can the algorithm be implemented efficiently on modern GPGPU architectures?
We present two highly optimized algorithms to perform back projection on parallel hardware. One relies on the texture engine to perform the reconstruction, while the other utilizes the core computational units of the GPU. Both methods significantly outperform the current state-of-the-art techniques found in standard reconstruction codes. Finally, we propose a hybrid approach combining both algorithms to better balance the load between GPU subsystems, which further boosts performance by about 30% on the NVIDIA Pascal microarchitecture.
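The two strategies contrasted in the abstract come down to how a detector sample is fetched. A hedged NumPy sketch of the difference (function names are mine; the actual GPU kernels are far more involved):

```python
import numpy as np

def gather_nearest(projection, t):
    # "ALU-style" gather: explicit index arithmetic on the core
    # computational units, reading the nearest detector sample
    idx = np.clip(np.round(t).astype(int), 0, len(projection) - 1)
    return projection[idx]

def gather_bilinear(projection, t):
    # "Texture-style" gather: linear interpolation between neighbouring
    # samples, which GPU texture hardware performs for free on a fetch
    t = np.clip(t, 0, len(projection) - 1)
    lo = np.floor(t).astype(int)
    hi = np.minimum(lo + 1, len(projection) - 1)
    frac = t - lo
    return projection[lo] * (1 - frac) + projection[hi] * frac
```

A hybrid scheme in the spirit of the paper would issue part of the per-pixel fetches through each path so that the texture engine and the computational units are busy at the same time.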
Investigation of the flow structure in thin polymer films using 3D µPTV enhanced by GPU
Digital visual exploration library
WAVE: A 3D Online Previewing Framework for Big Data Archives
Parasitoid biology preserved in mineralized fossils
The Common Data Acquisition Platform in the Helmholtz Association
Evaluation of GPUs as a level-1 track trigger for the High-Luminosity LHC
The NOVA project: maximizing beam time efficiency through synergistic analyses of SRμCT data
Real-time image-content-based beamline control for smart 4D X-ray imaging
High-throughput data acquisition and processing for real-time x-ray imaging
A scalable DAQ system with high-rate channels and FPGA- and GPU-trigger for the dark matter experiment EDELWEISS-III
KITcube – a mobile observation platform for convection studies deployed during HyMeX
A unified energy footprint for simulation software
Focal-plane detector system for the KATRIN experiment
UFO: A Scalable GPU-based Image Processing Framework for On-line Monitoring
NOVA Paper
WAVE
KITcube
UFO Framework
KATRIN Detector Paper
HDRI Paper
Auger Paper