
BIOS
====
The important options in the BIOS:
- IOMMU (Intel VT-d) - enables hardware translation between physical and bus addresses
- No Snoop - disables hardware cache coherency between DMA and CPU
- Max Payload (MMIO Size) - the maximal (useful) payload for the PCIe protocol
- Above 4G Decoding - this seems to allow bus addresses wider than 32 bits
- Memory performance - frequency, channel interleaving, and the hardware prefetcher
  all affect memory performance

IOMMU
=====
- As many PCI devices can address only 32 bits of memory, some address translation
  mechanism is required for DMA operation (it also helps with security by limiting
  PCI devices to the allowed address range only). There are several methods to
  achieve this; a probe-time sketch follows after this list.
  * Linux provides so-called bounce buffers (SWIOTLB). This is just a small memory
    buffer in the lower 4 GB of memory. The DMA is actually performed into this
    buffer and the data is then copied to the appropriate location. One problem
    with SWIOTLB is that it does not guarantee 4K-aligned addresses when mapping
    memory pages (in order to use the space optimally). This is supported properly
    by neither NWLDMA nor IPEDMA.
  * Alternatively, the hardware IOMMU can be used, which provides hardware address
    translation between physical and bus addresses. To use it, the technology has
    to be enabled both in the BIOS and in the kernel.
    + The Intel VT-d or AMD-Vi (AMD IOMMU) virtualization technology has to be
      enabled in the BIOS.
    + On Intel platforms it is enabled with the "intel_iommu=on" kernel parameter
      (the alternative is to build the kernel with CONFIG_INTEL_IOMMU_DEFAULT_ON).
    + Checking: dmesg | grep -e IOMMU -e DMAR -e PCI-DMA
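
A minimal probe-time sketch (for a hypothetical driver; the function name is an
assumption) of how the DMA addressing capability is negotiated. If the device
cannot be granted a 64-bit mask, the kernel transparently falls back to the
IOMMU or SWIOTLB for memory above 4 GB:

    #include <linux/pci.h>
    #include <linux/dma-mapping.h>

    static int example_probe(struct pci_dev *pdev, const struct pci_device_id *id)
    {
        int err = pci_enable_device(pdev);
        if (err)
            return err;
        pci_set_master(pdev);  /* allow the device to initiate DMA */

        /* Prefer full 64-bit bus addresses ("Above 4G Decoding" in the BIOS). */
        if (dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(64))) {
            /* The device is limited to 32-bit addresses: buffers above 4 GB
             * will be remapped by the IOMMU or bounced through SWIOTLB. */
            err = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32));
            if (err)
                return err;
        }
        return 0;
    }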

DMA Cache Coherency
===================
The DMA API distinguishes two types of memory: coherent and non-coherent (a
combined sketch follows after this list).
- For coherent memory, the hardware takes care of cache consistency. This is
  often achieved by snooping (No Snoop should be disabled in the BIOS).
  Alternatively, the same effect can be achieved by using non-cached memory.
  There are architectures with 100% cache-coherent memory and others where only
  a part of the memory is kept cache-coherent. For such architectures the
  coherent memory can be allocated with
    dma_alloc_coherent(...) / dma_alloc_attrs(...)
  * However, coherent memory can be slow (especially on large SMP systems).
    Also, the minimal allocation unit may be restricted to a page. Therefore,
    it is useful to group consistent mappings together (e.g. with the dma_pool
    API).
- On the other hand, it is possible to allocate streaming DMA memory, which is
  synchronized using:
    pci_dma_sync_single_for_device / pci_dma_sync_single_for_cpu
- It may happen that all memory is coherent anyway and we do not need to call
  these two functions. Currently, this seems not to be required on x86_64, which
  may indicate that snooping is performed for all available memory. On the other
  hand, it may be only because, by luck, nothing has been cached so far.
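
A sketch contrasting the two kinds of memory (a hypothetical helper; the buffer
size and DMA direction are assumptions). dma_sync_single_for_device() and
dma_sync_single_for_cpu() are the functions wrapped by the pci_dma_sync_single_*
calls above:

    #include <linux/pci.h>
    #include <linux/dma-mapping.h>
    #include <linux/slab.h>

    static int example_dma(struct pci_dev *pdev)
    {
        dma_addr_t desc_bus, buf_bus;
        void *desc, *buf;
        int err = -ENOMEM;

        /* Coherent buffer: consistency with the CPU caches is guaranteed by
         * hardware (snooping) or by mapping it uncached; no explicit sync. */
        desc = dma_alloc_coherent(&pdev->dev, PAGE_SIZE, &desc_bus, GFP_KERNEL);
        if (!desc)
            return -ENOMEM;

        /* Streaming buffer: ordinary kernel memory; ownership has to be
         * handed over explicitly between the device and the CPU. */
        buf = kmalloc(PAGE_SIZE, GFP_KERNEL);
        if (!buf)
            goto out_coherent;
        buf_bus = dma_map_single(&pdev->dev, buf, PAGE_SIZE, DMA_FROM_DEVICE);
        if (dma_mapping_error(&pdev->dev, buf_bus))
            goto out_buf;

        /* After mapping, the device owns the buffer and may DMA into it. */
        dma_sync_single_for_cpu(&pdev->dev, buf_bus, PAGE_SIZE, DMA_FROM_DEVICE);
        /* ... the CPU may now safely read the received data ... */
        dma_sync_single_for_device(&pdev->dev, buf_bus, PAGE_SIZE, DMA_FROM_DEVICE);
        /* ... the buffer is handed back to the device for the next transfer ... */

        err = 0;
        dma_unmap_single(&pdev->dev, buf_bus, PAGE_SIZE, DMA_FROM_DEVICE);
    out_buf:
        kfree(buf);
    out_coherent:
        dma_free_coherent(&pdev->dev, PAGE_SIZE, desc, desc_bus);
        return err;
    }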

PCIe Payload
============
- A kind of MTU for the PCIe protocol. The higher the value, the lower the
  slow-down caused by protocol headers while streaming large amounts of data.
  The current values can be checked with 'lspci -vv' (or from a driver, see the
  sketch after this list). For each device, there are two values:
  * MaxPayload under DevCap indicates the MaxPayload supported by the device.
  * MaxPayload under DevCtl indicates the MaxPayload negotiated between the
    device and the chipset. The negotiated MaxPayload is the minimal value
    supported by all the infrastructure between the device and the chipset.
    Normally, it is limited by the MaxPayload supported by the PCIe root port
    on the chipset. Most systems are currently restricted to 256 bytes.
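
A sketch of reading both values from a driver (hypothetical helper; the kernel
provides pcie_get_mps() for the negotiated value, and the supported value can
be decoded from the Device Capabilities register):

    #include <linux/pci.h>

    static void example_report_mps(struct pci_dev *pdev)
    {
        u32 devcap;

        /* MaxPayload supported by the device (DevCap, encoded as 128 << n). */
        pcie_capability_read_dword(pdev, PCI_EXP_DEVCAP, &devcap);
        dev_info(&pdev->dev, "supported MaxPayload: %d bytes\n",
                 128 << (devcap & PCI_EXP_DEVCAP_PAYLOAD));

        /* MaxPayload currently negotiated (DevCtl). */
        dev_info(&pdev->dev, "negotiated MaxPayload: %d bytes\n",
                 pcie_get_mps(pdev));
    }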

Memory Performance
==================
- Memory performance is quite critical as we currently need triple the PCIe
  bandwidth from the memory: the DMA writes to memory, we read that memory (it
  is not in the cache), and we write the copy back to memory.
- The most important thing is to enable Channel Interleaving (otherwise a
  single-channel copy will be performed). On the other hand, Rank Interleaving
  does not matter much.
- On some motherboards (ASRock X79, for instance), when the memory speed is set
  manually, interleaving is switched off in AUTO mode. So, it is safer to switch
  interleaving on manually.
- Hardware prefetching helps a little bit and should be turned on.
- A faster memory frequency helps. As we are streaming, I guess this is more
  important than even slightly better CAS & RAS latencies, but I have not
  checked.
- Memory bank conflicts may sometimes significantly harm performance. A bank
  conflict happens if we read from and write to different rows of the same bank
  (there could also be a conflict with the DMA operation). I don't have a good
  idea how to prevent this at the moment.
- The most efficient memcpy implementation depends on the CPU generation. For
  the latest models, AVX seems to be the most efficient. Filling all AVX
  registers before writing increases performance. Copying multiple pages in
  parallel also gains quite a lot of performance (still, first we read from
  multiple pages and then write to multiple pages; see ssebench and the sketch
  at the end of this section).
- Usage of HugePages makes the performance more stable. Using page-locked
  memory does not help at all.
- This will still give only about 10 - 15 GB/s at maximum, and on multiprocessor
  systems about 5 GB/s because of performance penalties due to snooping.
  Therefore, copying with multiple threads is preferable.
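
A user-space sketch of the AVX copy idea (illustrative only, not the actual
ssebench code; the function name, the four-register batch, and the alignment
requirements are assumptions): several AVX registers are loaded first, then
written back with non-temporal stores so that the copied data does not pollute
the cache:

    #include <immintrin.h>
    #include <stddef.h>

    /* Copies 'size' bytes; 'src' and 'dst' must be 32-byte aligned and
     * 'size' a multiple of 128 bytes (assumptions of this sketch). */
    static void avx_stream_copy(void *dst, const void *src, size_t size)
    {
        const __m256i *s = (const __m256i *)src;
        __m256i *d = (__m256i *)dst;

        for (size_t i = 0; i < size / 32; i += 4) {
            /* Fill several registers first ... */
            __m256i r0 = _mm256_load_si256(s + i);
            __m256i r1 = _mm256_load_si256(s + i + 1);
            __m256i r2 = _mm256_load_si256(s + i + 2);
            __m256i r3 = _mm256_load_si256(s + i + 3);
            /* ... then write them back with non-temporal (streaming)
             * stores that bypass the cache. */
            _mm256_stream_si256(d + i, r0);
            _mm256_stream_si256(d + i + 1, r1);
            _mm256_stream_si256(d + i + 2, r2);
            _mm256_stream_si256(d + i + 3, r3);
        }
        _mm_sfence();  /* make the streaming stores globally visible */
    }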