CUDA compilers have options for producing 32-bit or 64-bit PTX. What is the difference between these? Is it like x86, where NVIDIA GPUs actually have separate 32-bit and 64-bit ISAs? Or does it relate to host code only?
1 Answer
Pointers are certainly the most obvious difference. The 64-bit machine model enables 64-bit pointers, which in turn enable a variety of things, such as address spaces larger than 4 GB and unified virtual addressing (UVA). UVA in turn enables other features, such as GPUDirect Peer-to-Peer. The CUDA IPC API also depends on the 64-bit machine model.
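As a rough sketch of how the machine model is selected and where it shows up, the nvcc flags below are real, but note that recent toolkits have dropped 32-bit device-code support, so the -m32 line only applies to older CUDA versions; the kernel itself is just a placeholder:

    // Generate PTX for each machine model (older toolkits; -m64 is the default):
    //   nvcc -m64 -ptx kernel.cu -o kernel64.ptx
    //   nvcc -m32 -ptx kernel.cu -o kernel32.ptx
    //
    // The resulting PTX declares the pointer width in its header:
    //   .address_size 64    (64-bit machine model)
    //   .address_size 32    (32-bit machine model)

    __global__ void copy(const float *in, float *out, int n)
    {
        // Pointer width (and thus address arithmetic) follows the machine model;
        // the C++ source itself is identical either way.
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            out[i] = in[i];
    }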
The x64 ISA is not completely different from the x86 ISA; it is mostly an extension of it. Those familiar with the x86 ISA will find the x64 ISA familiar, with natural extensions to 64 bits where needed. Likewise, the 64-bit machine model is an extension of the PTX ISA's capabilities to 64 bits, and most PTX instructions work exactly the same way.
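To illustrate how little changes at the PTX level, here are schematic fragments (not complete PTX modules): the same global load looks nearly identical under either machine model; only the width of the address register differs.

    // 32-bit machine model (.address_size 32): addresses fit in 32-bit registers
    .reg .b32 %r1;
    .reg .f32 %f1;
    ld.global.f32  %f1, [%r1];

    // 64-bit machine model (.address_size 64): addresses use 64-bit registers
    .reg .b64 %rd1;
    .reg .f32 %f2;
    ld.global.f32  %f2, [%rd1];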
The 32-bit machine model can still handle 64-bit data types (such as double and long long), so properly written CUDA C/C++ source code frequently needs no changes to compile for either the 32-bit or the 64-bit machine model. If you program directly in PTX, you may have to account for the pointer size differences, at least; see the sketch below.
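A minimal sketch of that point: the kernel below uses only 64-bit data types and compiles unchanged under either machine model, since the machine model governs pointer width, not data-type width. It is written for a single thread for brevity, e.g. launched as sum64<<<1,1>>>(...).

    __global__ void sum64(const long long *a, const double *b,
                          long long *outI, double *outD, int n)
    {
        long long si = 0;    // 64-bit integer arithmetic: available in both models
        double    sd = 0.0;  // 64-bit floating point: available in both models
        for (int i = 0; i < n; ++i) {
            si += a[i];
            sd += b[i];
        }
        *outI = si;
        *outD = sd;
    }

    // Only hand-written PTX sees the difference directly: pointer kernel
    // parameters are loaded with ld.param.u32 under the 32-bit machine model
    // versus ld.param.u64 under the 64-bit machine model.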