List of mkl_solver* libraries deprecated since version 10.2 Update 2

Taken from the Intel article “mkl_solver* libraries are deprecated libraries since version 10.2 Update 2”.

Since version 10.2 Update 2 of Intel® MKL, all components of the Direct Solver (PARDISO and DSS), the Trust-Region (TR) Solver, the Iterative Sparse Solver (ISS) and GNU Multiple Precision (GMP) have been moved into the standard MKL libraries. The solver libraries (e.g. mkl_solver.lib and mkl_solver_sequential.lib for IA-32) are therefore now empty and are kept only for backward compatibility.

The deprecated libraries are the following:

Intel® MKL for Linux:

lib/32/libmkl_solver.a
lib/32/libmkl_solver_sequential.a
lib/em64t/libmkl_solver_ilp64.a
lib/em64t/libmkl_solver_ilp64_sequential.a
lib/em64t/libmkl_solver_lp64.a
lib/em64t/libmkl_solver_lp64_sequential.a
lib/ia64/libmkl_solver_ilp64.a
lib/ia64/libmkl_solver_ilp64_sequential.a
lib/ia64/libmkl_solver_lp64.a
lib/ia64/libmkl_solver_lp64_sequential.a

Therefore, the updated link lines will look like the following:

Linking on Intel®64:

static linking:

ifort pardiso.f -L$MKLPATH -I$MKLINCLUDE \
-Wl,--start-group \
$MKLPATH/libmkl_intel_lp64.a $MKLPATH/libmkl_intel_thread.a $MKLPATH/libmkl_core.a \
-Wl,--end-group -liomp5 -lpthread

dynamic linking:

ifort pardiso.f -L$MKLPATH -I$MKLINCLUDE \
-lmkl_intel_lp64 -lmkl_intel_thread -lmkl_core -liomp5 -lpthread

In these examples:

MKLPATH=$MKLROOT/lib/em64t
MKLINCLUDE=$MKLROOT/include
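
Because the deprecated solver archives are kept as empty stubs, existing link lines that still reference them will continue to work. As a quick sanity check, listing the members of one of the stub archives with ar should print nothing (using the MKLPATH defined above):

ar t $MKLPATH/libmkl_solver_lp64.a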

Compiling CPMD-3.17.1 with Intel-13.0.1.117 and OpenMPI-1.8.3

I’m assuming you have compiled OpenMPI with the Intel compilers. If you are not sure, you can look at the blog entry Compiling OpenMPI 1.6.5 with Intel 12.1.5 on CentOS 6.

To get the source code for CPMD, please go to http://www.cpmd.org/

Step 1: From the CPMD Directory

cd ~/CPMD-3.17.1/SOURCE
./mkconfig.sh IFORT-AMD64-MPI > Makefile

Step 2: I’m using the CentOS 6 built-in BLAS, LAPACK and ATLAS libraries. Make sure you configure the Makefile as shown below.

#--------------- Default Configuration for IFORT-AMD64-MPI ---------------
SRC  = .
DEST = .
BIN  = .
FFLAGS = -pc64  -tpp6 -O2 -unroll
#LFLAGS =  -L. -latlas_x86_64
LFLAGS =  -L/usr/lib64/atlas -llapack -lblas
CFLAGS = -O2 -Wall -m64
CPP = /lib/cpp -P -C -traditional
CPPFLAGS = -D__Linux -D__PGI -DFFT_DEFAULT -DPOINTER8 -DLINUX_IFC \
-DPARALLEL
NOOPT_FLAG =
CC = mpicc
FC = mpif77 -c
LD = mpif77 -i-static
AR = ar
#----------------------------------------------------------------------------
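
Before running make, it is worth verifying that the ATLAS, LAPACK and BLAS libraries referenced by LFLAGS are actually installed; on CentOS 6 the .so symlinks needed at link time come from the corresponding -devel packages. A quick check:

# ls /usr/lib64/atlas
# rpm -qa | grep -Ei 'atlas|blas|lapack'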

Step 3: Compile CPMD

# make

If the compilation succeeds, it should generate a cpmd.x executable.

Step 4: Pathing
Make sure your $PATH includes the directory containing the cpmd.x executable (see the example below). It is also important to check that the shared libraries are properly linked to the executable:

# ldd cpmd.x
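
For example, assuming cpmd.x was built in ~/CPMD-3.17.1/SOURCE (adjust to your actual build directory), you can append that directory to $PATH in your ~/.bashrc:

export PATH=$PATH:~/CPMD-3.17.1/SOURCE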

Step 5: Test your executable. Go to the CPMD Consortium site to download cpmd-test.tar.gz for testing.
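
A minimal parallel run then looks something like this (the input file name and process count are just examples):

$ mpirun -np 4 cpmd.x test-case.inp > test-case.out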

LSF retained the original Max Locked Memory and not the updated one

The value of “max locked memory” has been modified at the operating system level, but LSF still returns the original value.
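
On CentOS/RHEL, the OS-level change is typically made in /etc/security/limits.conf on the compute node, for example:

* soft memlock unlimited
* hard memlock unlimited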

Symptoms: even after the OS-level update, LSF still reports the old limit of 64 kbytes

[user1@cluster-h00 ~]$ bsub -m compute-node1 -I ulimit -a
Job <32400> is submitted to default queue <normal>.
<<Waiting for dispatch ...>>
<<Starting on compute-node1>>
core file size          (blocks, -c) unlimited
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 1027790
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 4096
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) unlimited
cpu time               (seconds, -t) unlimited
max user processes              (-u) 1027790
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited

To resolve this issue, restart sbatchd on the affected host so that LSF picks up the new limit (badmin hshutdown stops the slave batch daemon, and badmin hstartup starts it again):

# badmin hshutdown
# badmin hstartup
[user1@cluster-h00 ~]$ bsub -q gpgpu -m compute-node1 -I ulimit -a
Job <32490> is submitted to queue <gpgpu>.
<<Waiting for dispatch ...>>
<<Starting on compute-node1>>
core file size          (blocks, -c) unlimited
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 515133
max locked memory       (kbytes, -l) unlimited
max memory size         (kbytes, -m) unlimited
open files                      (-n) 4096
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) unlimited
cpu time               (seconds, -t) unlimited
max user processes              (-u) 515133
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited

References:

  1. LSF does not recognize that “max locked memory” has been updated

Algorithm negotiation failed for SSH Secure Shell Client

If you are using the dated SSH Secure Shell Client 3.2.9, you may have issues connecting to more recent OpenSSH servers.


Replacing the outdated client is the recommended fix. If you cannot change the client, you will have to relax the settings of the OpenSSH server on Linux instead. Add the following:

# vim /etc/ssh/sshd_config
# Ciphers
Ciphers aes128-cbc,aes192-cbc,aes256-cbc,blowfish-cbc,arcfour
KexAlgorithms diffie-hellman-group1-sha1

*If you are using Centrify-OpenSSH, you have to modify /etc/centrifydc/ssh/sshd_config in the same way.
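
After editing either file, restart the SSH daemon so that the new settings take effect (CentOS 6 syntax):

# service sshd restart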

References:

  1. Bug 1228013 – Server responded “Algorithm negotiation failed”

Enable Centrify Agent to read UID and GID from Centrify DirectManage Access Manager

We purchased Centrify Standard and set up the DirectManage Access Manager. Next, we proceeded to install the client agent on the compute node.

After unpacking and installing the agent, we ran:

# getent passwd  |grep kittycool
kittycool:x:1304567321211:1304567321211:kittycool:/home/kittycool:/bin/bash
kittycool:x:10001:10001:kittycool:/home/kittycool:/bin/bash

Apparently, getent passwd | grep kittycool pulls both the Active Directory UID and the DirectManage Access Manager UID for the user, and the two differ.

To resolve this issue, specify the zone used by DirectManage Access Manager when joining the domain, so that the user's UID is taken from the DirectManage Access Manager zone.

# adjoin -z cluster -u OU_Administrator  staff.mycompany.com.sg -c "staff.mycompany.com.sg/HPC/Computers"
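
To confirm that the agent has joined the intended zone, you can query the agent status (assuming the standard Centrify DirectControl command-line tools are installed), which reports the joined domain and zone:

# adinfo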

To check that it is now displaying the correct UID and GID:

# getent passwd  |grep kittycool
kittycool:x:10001:10001:kittycool:/home/kittycool:/bin/bash

Compiling and Installing mfix-2016 with Intel MPI 5.0.3

The documentation for installing MFIX can be found at https://mfix.netl.doe.gov/download/mfix/mfix_current_documentation/mfix_user_guide.pdf

We compiled with Intel 15.0.6 and Intel MPI 5.0.3. Once those are in your environment, you can configure and build with the following parameters:

# ./configure FC=mpif90 FCFLAGS='-g -O2' --prefix=/usr/local/mfix-2016.1_impi --enable-dmp
# make -j 16
# make install

Copy libmfix.a into a lib directory under /usr/local/mfix-2016.1_impi:

# mkdir /usr/local/mfix-2016.1_impi/lib
# cp libmfix.a /usr/local/mfix-2016.1_impi/lib
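
Once installed, a DMP run is launched through Intel MPI's mpirun from the case directory. A minimal sketch, assuming make install placed the mfix binary under the prefix's bin directory and that the current directory contains your mfix.dat (case directory and process count are examples):

$ cd ~/my-mfix-case
$ mpirun -np 16 /usr/local/mfix-2016.1_impi/bin/mfix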

Compiling FDS-SMV with Intel MPI-5.0.3 on CentOS 6

Download the FDS development zip from the fds-smv GitHub repository.

Step 1: Unzip the fds-development.zip

$ unzip fds-development.zip

Step 2: Update .bashrc

.....
.....
export IFORT_COMPILER=/usr/local/intel_2015/composerxe/bin
#FDS environment -----------------------
export MPIDIST_ETH=/usr/local/intel_2015/impi/5.0.3.049/bin64
export MPIDIST_IB=/usr/local/intel_2015/impi/5.0.3.049/bin64
source ~/.bashrc_fds
#FDS -----------------------------------
.....
.....

Step 3: Initialise Intel Compilers

# touch /etc/profile.d/intel.sh
# vim /etc/profile.d/intel.sh
source /usr/local/intel_2015/composerxe/bin/compilervars.sh intel64
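
If the build later complains that the Intel MPI compiler wrappers cannot be found, you can additionally source Intel MPI's environment script, assuming the install path from Step 2:

source /usr/local/intel_2015/impi/5.0.3.049/bin64/mpivars.sh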

Step 4: Compilation

# cd /home/user1/Downloads/fds-smv-development/FDS_Compilation/mpi_intel_linux_64ib
# ./make_fds.sh

Step 5: Test

# ./fds_mpi_intel_linux_64ib

Fire Dynamics Simulator

 Current Date     : July 21, 2016  13:13:56
 Version          : FDS 6.5.1
 Revision         :
 Revision Date    :
 Compilation Date : Jul 21, 2016  12:16:33

 MPI Enabled; Number of MPI Processes:          1
 OpenMP Enabled; Number of OpenMP Threads:   4

 MPI version: 3.0
 MPI library version: Intel(R) MPI Library 5.0 Update 3 for Linux* OS


 Consult FDS Users Guide Chapter, Running FDS, for further instructions.

 Hit Enter to Escape...
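
To run an actual case, pass the input file to the binary, typically through mpirun when using more than one MPI process (the case name here is just an example):

$ mpirun -np 4 ./fds_mpi_intel_linux_64ib my_case.fds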