
CUDA C Programming Guide
Design Guide

PG-02829-001_v11.6 | March 2022

Changes from Version 11.3

‣ Added Graph Memory Nodes.
‣ Formalized Asynchronous SIMT Programming Model.

Table of Contents

Chapter 1. Introduction
  1.1. The Benefits of Using GPUs
  1.2. CUDA: A General-Purpose Parallel Computing Platform and Programming Model
  1.3. A Scalable Programming Model
  1.4. Document Structure
Chapter 2. Programming Model
  2.1. Kernels
  2.2. Thread Hierarchy
  2.3. Memory Hierarchy
  2.4. Heterogeneous Programming
  2.5. Asynchronous SIMT Programming Model
    2.5.1. Asynchronous Operations
  2.6. Compute Capability
Chapter 3. Programming Interface
  3.1. Compilation with NVCC
    3.1.1. Compilation Workflow
      3.1.1.1. Offline Compilation
      3.1.1.2. Just-in-Time Compilation
    3.1.2. Binary Compatibility
    3.1.3. PTX Compatibility
    3.1.4. Application Compatibility
    3.1.5. C Compatibility
    3.1.6. 64-Bit Compatibility
  3.2. CUDA Runtime
    3.2.1. Initialization
    3.2.2. Device Memory
    3.2.3. Device Memory L2 Access Management
      3.2.3.1. L2 cache Set-Aside for Persisting Accesses
      3.2.3.2. L2 Policy for Persisting Accesses
      3.2.3.3. L2 Access Properties
      3.2.3.4. L2 Persistence Example
      3.2.3.5. Reset L2 Access to Normal
      3.2.3.6. Manage Utilization of L2 set-aside cache
      3.2.3.7. Query L2 cache Properties
      3.2.3.8. Control L2 Cache Set-Aside Size for Persisting Memory Access
    3.2.4. Shared Memory
    3.2.5. Page-Locked Host Memory
      3.2.5.1. Portable Memory
      3.2.5.2. Write-Combining Memory
      3.2.5.3. Mapped Memory
    3.2.6. Asynchronous Concurrent Execution
      3.2.6.1. Concurrent Execution between Host and Device
      3.2.6.2. Concurrent Kernel Execution
      3.2.6.3. Overlap of Data Transfer and Kernel Execution
      3.2.6.4. Concurrent Data Transfers
      3.2.6.5. Streams
      3.2.6.6. CUDA Graphs
      3.2.6.7. Events
      3.2.6.8. Synchronous Calls
    3.2.7. Multi-Device System
      3.2.7.1. Device Enumeration
      3.2.7.2. Device Selection
      3.2.7.3. Stream and Event Behavior
      3.2.7.4. Peer-to-Peer Memory Access
      3.2.7.5. Peer-to-Peer Memory Copy
    3.2.8. Unified Virtual Address Space
    3.2.9. Interprocess Communication
    3.2.10. Error Checking
    3.2.11. Call Stack
    3.2.12. Texture and Surface Memory
      3.2.12.1. Texture Memory
      3.2.12.2. Surface Memory
      3.2.12.3. CUDA Arrays
      3.2.12.4. Read/Write Coherency
    3.2.13. Graphics Interoperability
      3.2.13.1. OpenGL Interoperability
      3.2.13.2. Direct3D Interoperability
      3.2.13.3. SLI Interoperability
    3.2.14. External Resource Interoperability
      3.2.14.1. Vulkan Interoperability
      3.2.14.2. OpenGL Interoperability
      3.2.14.3. Direct3D 12 Interoperability
      3.2.14.4. Direct3D 11 Interoperability
      3.2.14.5. NVIDIA Software Communication Interface Interoperability (NVSCI)
    3.2.15. CUDA User Objects
  3.3. Versioning and Compatibility
  3.4. Compute Modes
  3.5. Mode Switches
  3.6. Tesla Compute Cluster Mode for Windows
Chapter 4. Hardware Implementation
  4.1. SIMT Architecture
  4.2. Hardware Multithreading
Chapter 5. Performance Guidelines
  5.1. Overall Performance Optimization Strategies
  5.2. Maximize Utilization
    5.2.1. Application Level
    5.2.2. Device Level
    5.2.3. Multiprocessor Level
      5.2.3.1. Occupancy Calculator
  5.3. Maximize Memory Throughput
    5.3.1. Data Transfer between Host and Device
    5.3.2. Device Memory Accesses
  5.4. Maximize Instruction Throughput
    5.4.1. Arithmetic Instructions
    5.4.2. Control Flow Instructions
    5.4.3. Synchronization Instruction
  5.5. Minimize Memory Thrashing
Appendix A. CUDA-Enabled GPUs
Appendix B. C Language Extensions
  B.1. Function Execution Space Specifiers
    B.1.1. __global__
    B.1.2. __device__
    B.1.3. __host__
    B.1.4. Undefined behavior
    B.1.5. __noinline__ and __forceinline__
  B.2. Variable Memory Space Specifiers
    B.2.1. __device__
    B.2.2. __constant__
    B.2.3. __shared__
    B.2.4. __managed__
    B.2.5. __restrict__
  B.3. Built-in Vector Types
    B.3.1. char, short, int, long, longlong, float, double
    B.3.2. dim3
  B.4. Built-in Variables
    B.4.1. gridDim
    B.4.2. blockIdx
    B.4.3. blockDim
    B.4.4. threadIdx
    B.4.5. warpSize
  B.5. Memory Fence Functions
  B.6. Synchronization Functions
  B.7. Mathematical Functions
  B.8. Texture Functions
    B.8.1. Texture Object API
      B.8.1.1. tex1Dfetch()
      B.8.1.2. tex1D()
      B.8.1.3. tex1DLod()
      B.8.1.4. tex1DGrad()
      B.8.1.5. tex2D()
      B.8.1.6. tex2DLod()
      B.8.1.7. tex2DGrad()
      B.8.1.8. tex3D()
      B.8.1.9. tex3DLod()
      B.8.1.10. tex3DGrad()
      B.8.1.11. tex1DLayered()
      B.8.1.12. tex1DLayeredLod()
      B.8.1.13. tex1DLayeredGrad()
      B.8.1.14. tex2DLayered()
      B.8.1.15. tex2DLayeredLod()
      B.8.1.16. tex2DLayeredGrad()
      B.8.1.17. texCubemap()
      B.8.1.18. texCubemapLod()
      B.8.1.19. texCubemapLayered()
      B.8.1.20. texCubemapLayeredLod()
      B.8.1.21. tex2Dgather()
    B.8.2. Texture Reference API
      B.8.2.1. tex1Dfetch()
      B.8.2.2. tex1D()
      B.8.2.3. tex1DLod()
      B.8.2.4. tex1DGrad()
      B.8.2.5. tex2D()
      B.8.2.6. tex2DLod()
      B.8.2.7. tex2DGrad()
      B.8.2.8. tex3D()
      B.8.2.9. tex3DLod()
      B.8.2.10. tex3DGrad()
      B.8.2.11. tex1DLayered()
      B.8.2.12. tex1DLayeredLod()
      B.8.2.13. tex1DLayeredGrad()
      B.8.2.14. tex2DLayered()
      B.8.2.15. tex2DLayeredLod()
      B.8.2.16. tex2DLayeredGrad()
      B.8.2.17. texCubemap()
      B.8.2.18. texCubemapLod()
      B.8.2.19. texCubemapLayered()
      B.8.2.20. texCubemapLayeredLod()
      B.8.2.21. tex2Dgather()
  B.9. Surface Functions
    B.9.1. Surface Object API
      B.9.1.1. surf1Dread()
      B.9.1.2. surf1Dwrite()
      B.9.1.3. surf2Dread()
      B.9.1.4. surf2Dwrite()
      B.9.1.5. surf3Dread()
      B.9.1.6. surf3Dwrite()
      B.9.1.7. surf1DLayeredread()
      B.9.1.8. surf1DLayeredwrite()
      B.9.1.9. surf2DLayeredread()
      B.9.1.10. surf2DLayeredwrite()
      B.9.1.11. surfCubemapread()
      B.9.1.12. surfCubemapwrite()
      B.9.1.13. surfCubemapLayeredread()
      B.9.1.14. surfCubemapLayeredwrite()
    B.9.2. Surface Reference API
      B.9.2.1. surf1Dread()
      B.9.2.2. surf1Dwrite()
      B.9.2.3. surf2Dread()
      B.9.2.4. surf2Dwrite()
      B.9.2.5. surf3Dread()
      B.9.2.6. surf3Dwrite()
      B.9.2.7. surf1DLayeredread()
      B.9.2.8. surf1DLayeredwrite()
      B.9.2.9. surf2DLayeredread()
      B.9.2.10. surf2DLayeredwrite()
      B.9.2.11. surfCubemapread()
      B.9.2.12. surfCubemapwrite()
      B.9.2.13. surfCubemapLayeredread()
      B.9.2.14. surfCubemapLayeredwrite()
  B.10. Read-Only Data Cache Load Function
  B.11. Load Functions Using Cache Hints
  B.12. Store Functions Using Cache Hints
  B.13. Time Function
  B.14. Atomic Functions
    B.14.1. Arithmetic Functions
      B.14.1.1. atomicAdd()
      B.14.1.2. atomicSub()
      B.14.1.3. atomicExch()
      B.14.1.4. atomicMin()
      B.14.1.5. atomicMax()
      B.14.1.6. atomicInc()
      B.14.1.7. atomicDec()
      B.14.1.8. atomicCAS()
    B.14.2. Bitwise Functions
      B.14.2.1. atomicAnd()
      B.14.2.2. atomicOr()
      B.14.2.3. atomicXor()
  B.15. Address Space Predicate Functions
    B.15.1. __isGlobal()
    B.15.2. __isShared()
    B.15.3. __isConstant()
    B.15.4. __isLocal()
  B.16. Address Space Conversion Functions
    B.16.1. __cvta_generic_to_global()
    B.16.2. __cvta_generic_to_shared()
    B.16.3. __cvta_generic_to_constant()
    B.16.4. __cvta_generic_to_local()