I'm encountering an error while trying to install torch-scatter 2.0.9 in a Docker container. I'm using the NVIDIA PyTorch 20.11 container image (nvcr.io/nvidia/pytorch:20.11-py3), which has Python 3.6.10, PyTorch 1.8.0, CUDA 11.1, and gcc 7.5.0 installed. Here is most of the error message (it exceeded the maximum length):
Building wheels for collected packages: torch-scatter
Building wheel for torch-scatter (setup.py) ... error
ERROR: Command errored out with exit status 1:
command: /opt/conda/bin/python -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-11559kum/torch-scatter/setup.py'"'"'; __file__='"'"'/tmp/pip-install-11559kum/torch-scatter/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' bdist_wheel -d /tmp/pip-wheel-niewfydy
cwd: /tmp/pip-install-11559kum/torch-scatter/
Complete output (318 lines):
running bdist_wheel
running build
running build_py
creating build
creating build/lib.linux-x86_64-3.6
creating build/lib.linux-x86_64-3.6/torch_scatter
copying torch_scatter/segment_csr.py -> build/lib.linux-x86_64-3.6/torch_scatter
copying torch_scatter/__init__.py -> build/lib.linux-x86_64-3.6/torch_scatter
copying torch_scatter/utils.py -> build/lib.linux-x86_64-3.6/torch_scatter
copying torch_scatter/segment_coo.py -> build/lib.linux-x86_64-3.6/torch_scatter
copying torch_scatter/scatter.py -> build/lib.linux-x86_64-3.6/torch_scatter
copying torch_scatter/placeholder.py -> build/lib.linux-x86_64-3.6/torch_scatter
creating build/lib.linux-x86_64-3.6/torch_scatter/composite
copying torch_scatter/composite/__init__.py -> build/lib.linux-x86_64-3.6/torch_scatter/composite
copying torch_scatter/composite/softmax.py -> build/lib.linux-x86_64-3.6/torch_scatter/composite
copying torch_scatter/composite/std.py -> build/lib.linux-x86_64-3.6/torch_scatter/composite
copying torch_scatter/composite/logsumexp.py -> build/lib.linux-x86_64-3.6/torch_scatter/composite
running build_ext
building 'torch_scatter._segment_csr_cpu' extension
creating build/temp.linux-x86_64-3.6
creating build/temp.linux-x86_64-3.6/csrc
creating build/temp.linux-x86_64-3.6/csrc/cpu
gcc -pthread -B /opt/conda/compiler_compat -Wl,--sysroot=/ -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -Icsrc -I/opt/conda/lib/python3.6/site-packages/torch/include -I/opt/conda/lib/python3.6/site-packages/torch/include/torch/csrc/api/include -I/opt/conda/lib/python3.6/site-packages/torch/include/TH -I/opt/conda/lib/python3.6/site-packages/torch/include/THC -I/opt/conda/include/python3.6m -c csrc/segment_csr.cpp -o build/temp.linux-x86_64-3.6/csrc/segment_csr.o -O2 -DAT_PARALLEL_OPENMP -fopenmp -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=_segment_csr_cpu -D_GLIBCXX_USE_CXX11_ABI=1 -std=c++14
cc1plus: warning: command line option ‘-Wstrict-prototypes’ is valid for C/ObjC but not for C++
In file included from /opt/conda/lib/python3.6/site-packages/torch/include/torch/script.h:9:0,
from csrc/segment_csr.cpp:2:
/opt/conda/lib/python3.6/site-packages/torch/include/torch/csrc/autograd/custom_function.h: In instantiation of ‘torch::autograd::variable_list torch::autograd::CppNode<T>::apply(torch::autograd::variable_list&&) [with T = GatherCSR; torch::autograd::variable_list = std::vector<at::Tensor>]’:
csrc/segment_csr.cpp:230:54: required from here
gcc -pthread -B /opt/conda/compiler_compat -Wl,--sysroot=/ -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -Icsrc -I/opt/conda/lib/python3.6/site-packages/torch/include -I/opt/conda/lib/python3.6/site-packages/torch/include/torch/csrc/api/include -I/opt/conda/lib/python3.6/site-packages/torch/include/TH -I/opt/conda/lib/python3.6/site-packages/torch/include/THC -I/opt/conda/include/python3.6m -c csrc/cpu/segment_csr_cpu.cpp -o build/temp.linux-x86_64-3.6/csrc/cpu/segment_csr_cpu.o -O2 -DAT_PARALLEL_OPENMP -fopenmp -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=_segment_csr_cpu -D_GLIBCXX_USE_CXX11_ABI=1 -std=c++14
cc1plus: warning: command line option ‘-Wstrict-prototypes’ is valid for C/ObjC but not for C++
g++ -pthread -shared -B /opt/conda/compiler_compat -L/opt/conda/lib -Wl,-rpath=/opt/conda/lib -Wl,--no-as-needed -Wl,--sysroot=/ build/temp.linux-x86_64-3.6/csrc/segment_csr.o build/temp.linux-x86_64-3.6/csrc/cpu/segment_csr_cpu.o -L/opt/conda/lib/python3.6/site-packages/torch/lib -lc10 -ltorch -ltorch_cpu -ltorch_python -o build/lib.linux-x86_64-3.6/torch_scatter/_segment_csr_cpu.so -s
building 'torch_scatter._segment_csr_cuda' extension
creating build/temp.linux-x86_64-3.6/csrc/cuda
gcc -pthread -B /opt/conda/compiler_compat -Wl,--sysroot=/ -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -DWITH_CUDA -Icsrc -I/opt/conda/lib/python3.6/site-packages/torch/include -I/opt/conda/lib/python3.6/site-packages/torch/include/torch/csrc/api/include -I/opt/conda/lib/python3.6/site-packages/torch/include/TH -I/opt/conda/lib/python3.6/site-packages/torch/include/THC -I/usr/local/cuda/include -I/opt/conda/include/python3.6m -c csrc/segment_csr.cpp -o build/temp.linux-x86_64-3.6/csrc/segment_csr.o -O2 -DAT_PARALLEL_OPENMP -fopenmp -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=_segment_csr_cuda -D_GLIBCXX_USE_CXX11_ABI=1 -std=c++14
cc1plus: warning: command line option ‘-Wstrict-prototypes’ is valid for C/ObjC but not for C++
In file included from /opt/conda/lib/python3.6/site-packages/torch/include/torch/script.h:9:0,
from csrc/segment_csr.cpp:2:
/opt/conda/lib/python3.6/site-packages/torch/include/torch/csrc/autograd/custom_function.h: In instantiation of ‘torch::autograd::variable_list torch::autograd::CppNode<T>::apply(torch::autograd::variable_list&&) [with T = GatherCSR; torch::autograd::variable_list = std::vector<at::Tensor>]’:
csrc/segment_csr.cpp:230:54: required from here
/opt/conda/lib/python3.6/site-packages/torch/include/torch/csrc/autograd/custom_function.h: In instantiation of ‘torch::autograd::variable_list torch::autograd::CppNode<T>::apply(torch::autograd::variable_list&&) [with T = SegmentMaxCSR; torch::autograd::variable_list = std::vector<at::Tensor>]’:
csrc/segment_csr.cpp:230:54: required from here
/opt/conda/lib/python3.6/site-packages/torch/include/torch/csrc/autograd/custom_function.h: In instantiation of ‘torch::autograd::variable_list torch::autograd::CppNode<T>::apply(torch::autograd::variable_list&&) [with T = SegmentMinCSR; torch::autograd::variable_list = std::vector<at::Tensor>]’:
csrc/segment_csr.cpp:230:54: required from here
/opt/conda/lib/python3.6/site-packages/torch/include/torch/csrc/autograd/custom_function.h: In instantiation of ‘torch::autograd::variable_list torch::autograd::CppNode<T>::apply(torch::autograd::variable_list&&) [with T = SegmentMeanCSR; torch::autograd::variable_list = std::vector<at::Tensor>]’:
csrc/segment_csr.cpp:230:54: required from here
/opt/conda/lib/python3.6/site-packages/torch/include/torch/csrc/autograd/custom_function.h: In instantiation of ‘torch::autograd::variable_list torch::autograd::CppNode<T>::apply(torch::autograd::variable_list&&) [with T = SegmentSumCSR; torch::autograd::variable_list = std::vector<at::Tensor>]’:
csrc/segment_csr.cpp:230:54: required from here
gcc -pthread -B /opt/conda/compiler_compat -Wl,--sysroot=/ -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -DWITH_CUDA -Icsrc -I/opt/conda/lib/python3.6/site-packages/torch/include -I/opt/conda/lib/python3.6/site-packages/torch/include/torch/csrc/api/include -I/opt/conda/lib/python3.6/site-packages/torch/include/TH -I/opt/conda/lib/python3.6/site-packages/torch/include/THC -I/usr/local/cuda/include -I/opt/conda/include/python3.6m -c csrc/cpu/segment_csr_cpu.cpp -o build/temp.linux-x86_64-3.6/csrc/cpu/segment_csr_cpu.o -O2 -DAT_PARALLEL_OPENMP -fopenmp -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=_segment_csr_cuda -D_GLIBCXX_USE_CXX11_ABI=1 -std=c++14
cc1plus: warning: command line option ‘-Wstrict-prototypes’ is valid for C/ObjC but not for C++
/usr/local/cuda/bin/nvcc -DWITH_CUDA -Icsrc -I/opt/conda/lib/python3.6/site-packages/torch/include -I/opt/conda/lib/python3.6/site-packages/torch/include/torch/csrc/api/include -I/opt/conda/lib/python3.6/site-packages/torch/include/TH -I/opt/conda/lib/python3.6/site-packages/torch/include/THC -I/usr/local/cuda/include -I/opt/conda/include/python3.6m -c csrc/cuda/segment_csr_cuda.cu -o build/temp.linux-x86_64-3.6/csrc/cuda/segment_csr_cuda.o -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr --compiler-options '-fPIC' --expt-relaxed-constexpr -O2 -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=_segment_csr_cuda -D_GLIBCXX_USE_CXX11_ABI=1 -gencode=arch=compute_86,code=sm_86 -gencode=arch=compute_86,code=compute_86 -gencode=arch=compute_80,code=sm_80 -gencode=arch=compute_61,code=sm_61 -gencode=arch=compute_60,code=sm_60 -gencode=arch=compute_52,code=sm_52 -gencode=arch=compute_70,code=sm_70 -gencode=arch=compute_75,code=sm_75 -std=c++14
g++ -pthread -shared -B /opt/conda/compiler_compat -L/opt/conda/lib -Wl,-rpath=/opt/conda/lib -Wl,--no-as-needed -Wl,--sysroot=/ build/temp.linux-x86_64-3.6/csrc/segment_csr.o build/temp.linux-x86_64-3.6/csrc/cpu/segment_csr_cpu.o build/temp.linux-x86_64-3.6/csrc/cuda/segment_csr_cuda.o -L/opt/conda/lib/python3.6/site-packages/torch/lib -L/usr/local/cuda/lib64 -lc10 -ltorch -ltorch_cpu -ltorch_python -lcudart -lc10_cuda -ltorch_cuda -o build/lib.linux-x86_64-3.6/torch_scatter/_segment_csr_cuda.so -s
building 'torch_scatter._version_cpu' extension
gcc -pthread -B /opt/conda/compiler_compat -Wl,--sysroot=/ -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -Icsrc -I/opt/conda/lib/python3.6/site-packages/torch/include -I/opt/conda/lib/python3.6/site-packages/torch/include/torch/csrc/api/include -I/opt/conda/lib/python3.6/site-packages/torch/include/TH -I/opt/conda/lib/python3.6/site-packages/torch/include/THC -I/opt/conda/include/python3.6m -c csrc/version.cpp -o build/temp.linux-x86_64-3.6/csrc/version.o -O2 -DAT_PARALLEL_OPENMP -fopenmp -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=_version_cpu -D_GLIBCXX_USE_CXX11_ABI=1 -std=c++14
cc1plus: warning: command line option ‘-Wstrict-prototypes’ is valid for C/ObjC but not for C++
In file included from /opt/conda/lib/python3.6/site-packages/torch/include/ATen/core/boxing/KernelFunction_impl.h:2:0,
from /opt/conda/lib/python3.6/site-packages/torch/include/ATen/core/boxing/KernelFunction.h:239,
from /opt/conda/lib/python3.6/site-packages/torch/include/ATen/core/dispatch/Dispatcher.h:4,
from /opt/conda/lib/python3.6/site-packages/torch/include/torch/csrc/jit/runtime/operator.h:6,
from /opt/conda/lib/python3.6/site-packages/torch/include/torch/csrc/jit/ir/ir.h:7,
from /opt/conda/lib/python3.6/site-packages/torch/include/torch/csrc/jit/api/function_impl.h:4,
from /opt/conda/lib/python3.6/site-packages/torch/include/torch/csrc/jit/api/method.h:5,
from /opt/conda/lib/python3.6/site-packages/torch/include/torch/csrc/jit/api/object.h:5,
from /opt/conda/lib/python3.6/site-packages/torch/include/torch/csrc/jit/frontend/tracer.h:9,
from /opt/conda/lib/python3.6/site-packages/torch/include/torch/csrc/autograd/generated/variable_factories.h:12,
from /opt/conda/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/types.h:7,
from /opt/conda/lib/python3.6/site-packages/torch/include/torch/script.h:3,
from csrc/version.cpp:2:
/opt/conda/lib/python3.6/site-packages/torch/include/ATen/core/boxing/impl/make_boxed_from_unboxed_functor.h: In instantiation of ‘std::decay_t<typename c10::guts::infer_function_traits<Functor>::type::return_type> c10::impl::call_functor_with_args_from_stack_(Functor*, c10::Stack*, std::index_sequence<INDEX ...>) [with Functor = c10::impl::detail::WrapFunctionIntoRuntimeFunctor_<long int (*)(), long int, c10::guts::typelist::typelist<> >; bool AllowDeprecatedTypes = true; long unsigned int ...ivalue_arg_indices = {}; std::decay_t<typename c10::guts::infer_function_traits<Functor>::type::return_type> = long int; c10::Stack = std::vector<c10::IValue>; std::index_sequence<INDEX ...> = std::integer_sequence<long unsigned int>]’:
/opt/conda/lib/python3.6/site-packages/torch/include/ATen/core/boxing/impl/make_boxed_from_unboxed_functor.h:346:77: required from ‘std::decay_t<typename c10::guts::infer_function_traits<Functor>::type::return_type> c10::impl::call_functor_with_args_from_stack(Functor*, c10::Stack*) [with Functor = c10::impl::detail::WrapFunctionIntoRuntimeFunctor_<long int (*)(), long int, c10::guts::typelist::typelist<> >; bool AllowDeprecatedTypes = true; std::decay_t<typename c10::guts::infer_function_traits<Functor>::type::return_type> = long int; c10::Stack = std::vector<c10::IValue>]’
/opt/conda/lib/python3.6/site-packages/torch/include/ATen/core/boxing/impl/make_boxed_from_unboxed_functor.h:396:79: required from ‘c10::impl::make_boxed_from_unboxed_functor<KernelFunctor, AllowDeprecatedTypes>::call(c10::OperatorKernel*, const c10::OperatorHandle&, c10::Stack*)::<lambda()> [with KernelFunctor = c10::impl::detail::WrapFunctionIntoRuntimeFunctor_<long int (*)(), long int, c10::guts::typelist::typelist<> >; bool AllowDeprecatedTypes = true]’
/opt/conda/lib/python3.6/site-packages/torch/include/ATen/core/boxing/impl/make_boxed_from_unboxed_functor.h:396:90: required from ‘struct c10::impl::make_boxed_from_unboxed_functor<KernelFunctor, AllowDeprecatedTypes>::call(c10::OperatorKernel*, const c10::OperatorHandle&, c10::Stack*) [with KernelFunctor = c10::impl::detail::WrapFunctionIntoRuntimeFunctor_<long int (*)(), long int, c10::guts::typelist::typelist<> >; bool AllowDeprecatedTypes = true; c10::Stack = std::vector<c10::IValue>]::<lambda()>’
/opt/conda/lib/python3.6/site-packages/torch/include/ATen/core/boxing/impl/make_boxed_from_unboxed_functor.h:388:38: required from ‘static void c10::impl::make_boxed_from_unboxed_functor<KernelFunctor, AllowDeprecatedTypes>::call(c10::OperatorKernel*, const c10::OperatorHandle&, c10::Stack*) [with KernelFunctor = c10::impl::detail::WrapFunctionIntoRuntimeFunctor_<long int (*)(), long int, c10::guts::typelist::typelist<> >; bool AllowDeprecatedTypes = true; c10::Stack = std::vector<c10::IValue>]’
/opt/conda/lib/python3.6/site-packages/torch/include/ATen/core/boxing/KernelFunction_impl.h:109:9: required from ‘static c10::KernelFunction c10::KernelFunction::makeFromUnboxedFunctor(std::unique_ptr<c10::OperatorKernel>) [with bool AllowLegacyTypes = true; KernelFunctor = c10::impl::detail::WrapFunctionIntoRuntimeFunctor_<long int (*)(), long int, c10::guts::typelist::typelist<> >]’
/opt/conda/lib/python3.6/site-packages/torch/include/ATen/core/boxing/KernelFunction_impl.h:173:114: required from ‘static c10::KernelFunction c10::KernelFunction::makeFromUnboxedRuntimeFunction(FuncType*) [with bool AllowLegacyTypes = true; FuncType = long int()]’
/opt/conda/lib/python3.6/site-packages/torch/include/ATen/core/op_registration/op_registration.h:519:72: required from ‘std::enable_if_t<(c10::guts::is_function_type<T>::value && (! std::is_same<FuncType, void(const c10::OperatorHandle&, std::vector<c10::IValue>*)>::value)), c10::RegisterOperators&&> c10::RegisterOperators::op(const string&, FuncType*, c10::RegisterOperators::Options&&) && [with FuncType = long int(); std::enable_if_t<(c10::guts::is_function_type<T>::value && (! std::is_same<FuncType, void(const c10::OperatorHandle&, std::vector<c10::IValue>*)>::value)), c10::RegisterOperators&&> = c10::RegisterOperators&&; std::__cxx11::string = std::__cxx11::basic_string<char>]’
csrc/version.cpp:25:79: required from here
/opt/conda/lib/python3.6/site-packages/torch/include/ATen/core/boxing/impl/make_boxed_from_unboxed_functor.h:326:22: warning: variable ‘num_ivalue_args’ set but not used [-Wunused-but-set-variable]
constexpr size_t num_ivalue_args = sizeof...(ivalue_arg_indices);
^~~~~~~~~~~~~~~
g++ -pthread -shared -B /opt/conda/compiler_compat -L/opt/conda/lib -Wl,-rpath=/opt/conda/lib -Wl,--no-as-needed -Wl,--sysroot=/ build/temp.linux-x86_64-3.6/csrc/version.o -L/opt/conda/lib/python3.6/site-packages/torch/lib -lc10 -ltorch -ltorch_cpu -ltorch_python -o build/lib.linux-x86_64-3.6/torch_scatter/_version_cpu.so -s
building 'torch_scatter._version_cuda' extension
gcc -pthread -B /opt/conda/compiler_compat -Wl,--sysroot=/ -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -DWITH_CUDA -Icsrc -I/opt/conda/lib/python3.6/site-packages/torch/include -I/opt/conda/lib/python3.6/site-packages/torch/include/torch/csrc/api/include -I/opt/conda/lib/python3.6/site-packages/torch/include/TH -I/opt/conda/lib/python3.6/site-packages/torch/include/THC -I/usr/local/cuda/include -I/opt/conda/include/python3.6m -c csrc/version.cpp -o build/temp.linux-x86_64-3.6/csrc/version.o -O2 -DAT_PARALLEL_OPENMP -fopenmp -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=_version_cuda -D_GLIBCXX_USE_CXX11_ABI=1 -std=c++14
cc1plus: warning: command line option ‘-Wstrict-prototypes’ is valid for C/ObjC but not for C++
In file included from /opt/conda/lib/python3.6/site-packages/torch/include/ATen/core/boxing/KernelFunction_impl.h:2:0,
from /opt/conda/lib/python3.6/site-packages/torch/include/ATen/core/boxing/KernelFunction.h:239,
from /opt/conda/lib/python3.6/site-packages/torch/include/ATen/core/dispatch/Dispatcher.h:4,
from /opt/conda/lib/python3.6/site-packages/torch/include/torch/csrc/jit/runtime/operator.h:6,
from /opt/conda/lib/python3.6/site-packages/torch/include/torch/csrc/jit/ir/ir.h:7,
from /opt/conda/lib/python3.6/site-packages/torch/include/torch/csrc/jit/api/function_impl.h:4,
from /opt/conda/lib/python3.6/site-packages/torch/include/torch/csrc/jit/api/method.h:5,
from /opt/conda/lib/python3.6/site-packages/torch/include/torch/csrc/jit/api/object.h:5,
from /opt/conda/lib/python3.6/site-packages/torch/include/torch/csrc/jit/frontend/tracer.h:9,
from /opt/conda/lib/python3.6/site-packages/torch/include/torch/csrc/autograd/generated/variable_factories.h:12,
from /opt/conda/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/types.h:7,
from /opt/conda/lib/python3.6/site-packages/torch/include/torch/script.h:3,
from csrc/version.cpp:2:
/opt/conda/lib/python3.6/site-packages/torch/include/ATen/core/boxing/impl/make_boxed_from_unboxed_functor.h: In instantiation of ‘std::decay_t<typename c10::guts::infer_function_traits<Functor>::type::return_type> c10::impl::call_functor_with_args_from_stack_(Functor*, c10::Stack*, std::index_sequence<INDEX ...>) [with Functor = c10::impl::detail::WrapFunctionIntoRuntimeFunctor_<long int (*)(), long int, c10::guts::typelist::typelist<> >; bool AllowDeprecatedTypes = true; long unsigned int ...ivalue_arg_indices = {}; std::decay_t<typename c10::guts::infer_function_traits<Functor>::type::return_type> = long int; c10::Stack = std::vector<c10::IValue>; std::index_sequence<INDEX ...> = std::integer_sequence<long unsigned int>]’:
/opt/conda/lib/python3.6/site-packages/torch/include/ATen/core/boxing/impl/make_boxed_from_unboxed_functor.h:346:77: required from ‘std::decay_t<typename c10::guts::infer_function_traits<Functor>::type::return_type> c10::impl::call_functor_with_args_from_stack(Functor*, c10::Stack*) [with Functor = c10::impl::detail::WrapFunctionIntoRuntimeFunctor_<long int (*)(), long int, c10::guts::typelist::typelist<> >; bool AllowDeprecatedTypes = true; std::decay_t<typename c10::guts::infer_function_traits<Functor>::type::return_type> = long int; c10::Stack = std::vector<c10::IValue>]’
/opt/conda/lib/python3.6/site-packages/torch/include/ATen/core/boxing/impl/make_boxed_from_unboxed_functor.h:396:79: required from ‘c10::impl::make_boxed_from_unboxed_functor<KernelFunctor, AllowDeprecatedTypes>::call(c10::OperatorKernel*, const c10::OperatorHandle&, c10::Stack*)::<lambda()> [with KernelFunctor = c10::impl::detail::WrapFunctionIntoRuntimeFunctor_<long int (*)(), long int, c10::guts::typelist::typelist<> >; bool AllowDeprecatedTypes = true]’
/opt/conda/lib/python3.6/site-packages/torch/include/ATen/core/boxing/impl/make_boxed_from_unboxed_functor.h:396:90: required from ‘struct c10::impl::make_boxed_from_unboxed_functor<KernelFunctor, AllowDeprecatedTypes>::call(c10::OperatorKernel*, const c10::OperatorHandle&, c10::Stack*) [with KernelFunctor = c10::impl::detail::WrapFunctionIntoRuntimeFunctor_<long int (*)(), long int, c10::guts::typelist::typelist<> >; bool AllowDeprecatedTypes = true; c10::Stack = std::vector<c10::IValue>]::<lambda()>’
/opt/conda/lib/python3.6/site-packages/torch/include/ATen/core/boxing/impl/make_boxed_from_unboxed_functor.h:388:38: required from ‘static void c10::impl::make_boxed_from_unboxed_functor<KernelFunctor, AllowDeprecatedTypes>::call(c10::OperatorKernel*, const c10::OperatorHandle&, c10::Stack*) [with KernelFunctor = c10::impl::detail::WrapFunctionIntoRuntimeFunctor_<long int (*)(), long int, c10::guts::typelist::typelist<> >; bool AllowDeprecatedTypes = true; c10::Stack = std::vector<c10::IValue>]’
/opt/conda/lib/python3.6/site-packages/torch/include/ATen/core/boxing/KernelFunction_impl.h:109:9: required from ‘static c10::KernelFunction c10::KernelFunction::makeFromUnboxedFunctor(std::unique_ptr<c10::OperatorKernel>) [with bool AllowLegacyTypes = true; KernelFunctor = c10::impl::detail::WrapFunctionIntoRuntimeFunctor_<long int (*)(), long int, c10::guts::typelist::typelist<> >]’
/opt/conda/lib/python3.6/site-packages/torch/include/ATen/core/boxing/KernelFunction_impl.h:173:114: required from ‘static c10::KernelFunction c10::KernelFunction::makeFromUnboxedRuntimeFunction(FuncType*) [with bool AllowLegacyTypes = true; FuncType = long int()]’
/opt/conda/lib/python3.6/site-packages/torch/include/ATen/core/op_registration/op_registration.h:519:72: required from ‘std::enable_if_t<(c10::guts::is_function_type<T>::value && (! std::is_same<FuncType, void(const c10::OperatorHandle&, std::vector<c10::IValue>*)>::value)), c10::RegisterOperators&&> c10::RegisterOperators::op(const string&, FuncType*, c10::RegisterOperators::Options&&) && [with FuncType = long int(); std::enable_if_t<(c10::guts::is_function_type<T>::value && (! std::is_same<FuncType, void(const c10::OperatorHandle&, std::vector<c10::IValue>*)>::value)), c10::RegisterOperators&&> = c10::RegisterOperators&&; std::__cxx11::string = std::__cxx11::basic_string<char>]’
csrc/version.cpp:25:79: required from here
/opt/conda/lib/python3.6/site-packages/torch/include/ATen/core/boxing/impl/make_boxed_from_unboxed_functor.h:326:22: warning: variable ‘num_ivalue_args’ set but not used [-Wunused-but-set-variable]
constexpr size_t num_ivalue_args = sizeof...(ivalue_arg_indices);
^~~~~~~~~~~~~~~
g++ -pthread -shared -B /opt/conda/compiler_compat -L/opt/conda/lib -Wl,-rpath=/opt/conda/lib -Wl,--no-as-needed -Wl,--sysroot=/ build/temp.linux-x86_64-3.6/csrc/version.o -L/opt/conda/lib/python3.6/site-packages/torch/lib -L/usr/local/cuda/lib64 -lc10 -ltorch -ltorch_cpu -ltorch_python -lcudart -lc10_cuda -ltorch_cuda -o build/lib.linux-x86_64-3.6/torch_scatter/_version_cuda.so -s
building 'torch_scatter._scatter_cpu' extension
gcc -pthread -B /opt/conda/compiler_compat -Wl,--sysroot=/ -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -Icsrc -I/opt/conda/lib/python3.6/site-packages/torch/include -I/opt/conda/lib/python3.6/site-packages/torch/include/torch/csrc/api/include -I/opt/conda/lib/python3.6/site-packages/torch/include/TH -I/opt/conda/lib/python3.6/site-packages/torch/include/THC -I/opt/conda/include/python3.6m -c csrc/scatter.cpp -o build/temp.linux-x86_64-3.6/csrc/scatter.o -O2 -DAT_PARALLEL_OPENMP -fopenmp -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=_scatter_cpu -D_GLIBCXX_USE_CXX11_ABI=1 -std=c++14
cc1plus: warning: command line option ‘-Wstrict-prototypes’ is valid for C/ObjC but not for C++
csrc/scatter.cpp: In static member function ‘static torch::autograd::variable_list ScatterMean::forward(torch::autograd::AutogradContext*, torch::autograd::Variable, torch::autograd::Variable, int64_t, c10::optional<at::Tensor>, c10::optional<long int>)’:
csrc/scatter.cpp:135:30: error: no matching function for call to ‘at::Tensor::div_(at::Tensor&, const char [6])’
out.div_(count, "floor");
^
In file included from /opt/conda/lib/python3.6/site-packages/torch/include/ATen/Tensor.h:3:0,
from /opt/conda/lib/python3.6/site-packages/torch/include/ATen/Context.h:4,
from /opt/conda/lib/python3.6/site-packages/torch/include/ATen/ATen.h:9,
from /opt/conda/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/types.h:3,
from /opt/conda/lib/python3.6/site-packages/torch/include/torch/script.h:3,
from csrc/scatter.cpp:2:
/opt/conda/lib/python3.6/site-packages/torch/include/ATen/core/TensorBody.h:685:12: note: candidate: at::Tensor& at::Tensor::div_(const at::Tensor&) const
Tensor & div_(const Tensor & other) const;
^~~~
/opt/conda/lib/python3.6/site-packages/torch/include/ATen/core/TensorBody.h:685:12: note: candidate expects 1 argument, 2 provided
/opt/conda/lib/python3.6/site-packages/torch/include/ATen/core/TensorBody.h:687:12: note: candidate: at::Tensor& at::Tensor::div_(c10::Scalar) const
Tensor & div_(Scalar other) const;
^~~~
/opt/conda/lib/python3.6/site-packages/torch/include/ATen/core/TensorBody.h:687:12: note: candidate expects 1 argument, 2 provided
In file included from /opt/conda/lib/python3.6/site-packages/torch/include/torch/script.h:9:0,
from csrc/scatter.cpp:2:
/opt/conda/lib/python3.6/site-packages/torch/include/torch/csrc/autograd/custom_function.h: In instantiation of ‘torch::autograd::variable_list torch::autograd::CppNode<T>::apply(torch::autograd::variable_list&&) [with T = ScatterMax; torch::autograd::variable_list = std::vector<at::Tensor>]’:
csrc/scatter.cpp:268:75: required from here
error: command 'gcc' failed with exit status 1
----------------------------------------
ERROR: Failed building wheel for torch-scatter
I've also tried the 20.12 container image, which has Python 3.8 installed, and I've tried both
pip install torch-scatter==2.0.9
and
pip install torch-scatter==2.0.9 -f https://data.pyg.org/whl/torch-1.8.0+cu111.html
to no avail. How might I resolve this issue?
If you don't need 2.0.9 exactly, you can simply install the prebuilt torch-scatter 2.0.8 wheel for your current PyTorch and CUDA setup.
Check the docs: if you go through them, you will see they provide a 2.0.8 wheel for PyTorch 1.8.0 and CUDA 11.1, but not 2.0.9.
So, per the docs, this should work for you:
pip install torch-scatter==2.0.8 -f https://data.pyg.org/whl/torch-1.8.0+cu111.html
If you specifically need 2.0.9, check that index to see which other PyTorch and CUDA combination supports it. E.g., torch 1.9.1 with CUDA 11.1 does: https://data.pyg.org/whl/
Of course, that would require a different PyTorch version.
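If you do go that route, the install would look something like this (just a sketch: the torch install command and the torch-1.9.1+cu111 index URL follow the naming pattern of the link above and are assumptions you should verify against https://data.pyg.org/whl/):
pip install torch==1.9.1+cu111 -f https://download.pytorch.org/whl/torch_stable.html
pip install torch-scatter==2.0.9 -f https://data.pyg.org/whl/torch-1.9.1+cu111.html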
Without a matching prebuilt wheel, pip falls back to building the extension from source, which is what is failing in the log above and can be difficult for this package.
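For reference, in the container image from the question the 2.0.8 install could be baked into the image with a Dockerfile along these lines (a minimal sketch, using only the base image and pip command discussed above):
FROM nvcr.io/nvidia/pytorch:20.11-py3
# Install the prebuilt torch-scatter 2.0.8 wheel that matches the container's
# PyTorch 1.8.0 / CUDA 11.1 stack, instead of letting pip compile from source.
RUN pip install torch-scatter==2.0.8 -f https://data.pyg.org/whl/torch-1.8.0+cu111.html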