Phanikumar Pentyala
2017-12-12 05:39:50 UTC
Dear users and developers,
Currently I am using two Tesla K40m cards for my computational work with the
Quantum ESPRESSO (QE) suite, http://www.quantum-espresso.org/. My GPU-enabled
QE build runs much slower than the normal (CPU-only) version. When I submit a
job on the GPU, it shows this warning:
"A high-performance Open MPI point-to-point messaging module was unable to
find any relevant network interfaces:
Module: OpenFabrics (openib)
Host: qmel
Another transport will be used instead, although this may result in
lower performance."
Is this the reason for the diminished GPU performance?
I performed the installation as follows:
1. ./configure --prefix=/home/xxxx/software/openmpi-2.0.4
--disable-openib-dynamic-sl --disable-openib-udcm --disable-openib-rdmacm
(because we don't have any InfiniBand adapter (HCA) in the server)
2. make all
3. make install
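Since the machine has no InfiniBand hardware, an alternative build (a sketch only; the CUDA path and install prefix below are assumptions to adjust for your system) is to configure Open MPI without verbs/openib support entirely and with CUDA-aware support enabled, which can matter for GPU codes like QE-GPU:

```shell
# Sketch: rebuild Open MPI skipping the InfiniBand (openib) component entirely,
# and with CUDA-aware support enabled.
# ASSUMPTIONS: CUDA installed at /usr/local/cuda; prefix taken from the post.
./configure --prefix=/home/xxxx/software/openmpi-2.0.4 \
            --without-verbs \
            --with-cuda=/usr/local/cuda
make -j "$(nproc)" all
make install
```

With --without-verbs the openib BTL is never built, so the "no relevant network interfaces" warning should not appear at all.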
Please correct me if I made any mistake in the installation, or do I have to
use an InfiniBand adapter in order to use Open MPI?
I have read a lot of posts on the Open MPI forum about removing the above
error when submitting jobs, and I added the flag "--mca btl ^openib". The
warning vanished, but performance stayed the same.
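For reference, the runtime workaround with that flag can be written out as a full command line (the process count and input file name below are placeholders, and pw.x is used as the QE executable):

```shell
# Run QE's pw.x on a single node, explicitly excluding the openib BTL so
# Open MPI falls back to shared memory and TCP without printing the warning.
# ASSUMPTIONS: -np 2 and scf.in are placeholders; adjust for your job.
mpirun -np 2 --mca btl ^openib pw.x -input scf.in > scf.out
```

On a single node this should not cost performance, since intra-node traffic uses shared memory rather than openib anyway.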
Current details of the server:
Server: FUJITSU PRIMERGY RX2540 M2
CUDA version: 9.0
Open MPI version: 2.0.4 (QE built with Intel MKL libraries)
QE-GPU version (my application): 5.4.0
P.S: Extra information attached
Thanks in advance
Regards
Phanikumar
Research scholar
IIT Kharagpur
Kharagpur, West Bengal
India