Snakemake gives InputFunctionException when using --profile slurm

I'm creating a pipeline using Snakemake to call methylation in nanopore sequencing data. I've run Snakemake using the --dryrun option and the DAG is constructed successfully. But when I add the option --profile slurm, I get the following error:
(nanopolish) [danielle.perley@talonhead2 nanopolish-CpG-calling]$ snakemake -np --use-conda --profile slurm test_data/20-001-002/20-001-002_fastq_pass.gz
Building DAG of jobs...
Job counts:
        count   jobs
        1       combine_tech_reps
        1
InputFunctionException in line 32 of /home/danielle.perley/nanopolish-CpG-calling/Snakefile:
Error:
SyntaxError: invalid syntax (<string>, line 1)
Wildcards:
sample=20-001-002
Traceback:
File "/home/danielle.perley/miniconda3/envs/nanopolish/lib/python3.7/site-packages/snakemake/executors/__init__.py", line 115, in run_jobs
File "/home/danielle.perley/miniconda3/envs/nanopolish/lib/python3.7/site-packages/snakemake/executors/__init__.py", line 120, in run
File "/home/danielle.perley/miniconda3/envs/nanopolish/lib/python3.7/site-packages/snakemake/executors/__init__.py", line 131, in _run
File "/home/danielle.perley/miniconda3/envs/nanopolish/lib/python3.7/site-packages/snakemake/executors/__init__.py", line 151, in printjob
File "/home/danielle.perley/miniconda3/envs/nanopolish/lib/python3.7/site-packages/snakemake/executors/__init__.py", line 137, in printjob
Line 33 is rule combine_tech_reps in my Snakefile. (I'm only showing the first part of my Snakefile here.)
from snakemake.utils import validate
import pandas as pd
import os.path
import glob

configfile: "config.yaml"

samples_df = pd.read_table(config["samples"], sep='\t')
samples_df = samples_df.set_index("Sample")
samples = list(samples_df.index.unique())

wildcard_constraints:
    sample = "|".join(samples)

def get_fast5(wildcards):
    f5 = glob.glob(os.path.join(config["raw_data"], wildcards.sample, "2*", "fast5_pass"))
    return f5

localrules: all, build_index

rule all:
    input:
        expand("results/Methylation/{sample}_frequency.tsv", sample=samples),
        expand("results/alignments/{sample}_flagstat.txt", sample=samples),
        expand("resources/QC/{sample}_pycoQC.json", sample=samples),
        expand("results/QC/{sample}_pycoQC.html", sample=samples),
        "report/multiQC.html"

rule combine_tech_reps:
    input:
        fqs = lambda wildcards: glob.glob(os.path.join(config["raw_data"], "{sample}", "2*", "{sample}_fastq_pass.gz").format(sample=wildcards.sample))
    output:
        fq = os.path.join(config["raw_data"], "{sample}", "{sample}_fastq_pass.gz")
    shell: """
        zcat {input} > {output}
        """
I have a slurm profile file at ~/.config/snakemake/slurm/config.yaml:
jobs: 10
cluster: "sbatch -p talon -t {resources.time} --mem={resources.mem} -c {resources.cpus} -o logs_slurm/{rule}_{wildcards} -e logs_slurm/{rule}_{wildcards}"
default-resources: [cpus=1, mem=2000, time=10:00]
use-conda: true
I'd really like to use this pipeline on our HPC, but I'm not sure what's causing this error.

I was able to solve my problem with the help of this post:
InputFunctionException: unexpected EOF while parsing
By adding the verbose flag:
snakemake -np --verbose --use-conda --profile slurm test_data/20-001-002/20-001-002_fastq_pass.gz
I could see that Snakemake was having trouble parsing the default resources:
10:00
^
Changing the default resources line of my config.yaml file:
default-resources: [cpus=1, mem=2000, time=600]
removed the error.
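For anyone hitting the same thing: as far as I can tell, Snakemake evaluates each default-resources entry as a Python expression, so a bare 10:00 is a syntax error, while 600 (or, I believe, a quoted "10:00") parses fine. A minimal sketch of the failure mode, using only the standard library:

import ast

# "600" and the quoted string are valid Python expressions; a bare
# 10:00 is not, which is what surfaces as the SyntaxError wrapped in
# the InputFunctionException above.
for value in ('600', '"10:00"', '10:00'):
    try:
        ast.parse(value, mode="eval")
        print(f"{value!r}: parses fine")
    except SyntaxError as exc:
        print(f"{value!r}: SyntaxError: {exc}")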

I am not sure if default-resources is a valid key in the config.
What happens if you try this as config.yaml:
jobs: 10
cluster: "sbatch -p talon -t {resources.time} --mem={resources.mem} -c {resources.cpus} -o logs_slurm/{rule}_{wildcards} -e logs_slurm/{rule}_{wildcards}"
use-conda: true
__default__:
  time: 10
  cpus: 1
  mem: 2GB

Related

How can I catch the error info when a psql command is embedded in Python code?

The data can be imported from the bash console:
psql -U postgres -d sample -c "copy data(f1,f2) from '/tmp/data.txt' with delimiter ',' "
Pager usage is off.
Timing is on.
COPY 1
Time: 9.573 ms
I remove the with delimiter clause to create an error:
psql -U postgres -d sample -c "copy data(f1,f2) from '/tmp/data.txt' "
Pager usage is off.
Timing is on.
ERROR: missing data for column "f2"
CONTEXT: COPY data, line 1: ""x1","y1""
Time: 0.318 ms
All the error info is shown on the bash console. I want to catch that error info when the psql command is embedded in Python code:
import os
import logging

logging_file = '/tmp/log.txt'
logging.basicConfig(filename=logging_file, level=logging.INFO, filemode='a+')
logger = logging.getLogger("import_data")

sql_string = """
psql -U postgres -d sample -c "copy data(f1,f2) from '/tmp/data.txt' "
"""

try:
    os.system(sql_string)
except Exception as e:
    logger.info(e)
Why can't the error info be written into the log file /tmp/log.txt? How can I catch the error info when the psql command is embedded in Python code?
The error produced by the psql command is not being captured by the try block: os.system() does not raise an exception when the command fails; it simply returns the command's exit status, so the except branch never runs.
You can use the subprocess module instead of os.system() to run the command and capture the output and error streams.
Try this code:
import logging
import subprocess

logging_file = './log.txt'
logging.basicConfig(filename=logging_file, level=logging.DEBUG, filemode='a+')

try:
    result = subprocess.run(
        ['psql', '-U', 'postgres', '-d', 'sample',
         '-c', "copy data(f1,f2) from '/tmp/data.txt'"],
        stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    if result.returncode != 0:
        raise Exception(result.stderr.decode('utf-8'))
except Exception as e:
    logging.info(e)
    # The line below would also record the traceback of the exception:
    # logging.exception(e)
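As a possible refinement (not part of the answer above): subprocess.run() can do the raising for you if you pass check=True, which turns a non-zero exit status into a CalledProcessError carrying the captured stderr. A minimal sketch of that variant:

import logging
import subprocess

logging.basicConfig(filename='./log.txt', level=logging.DEBUG, filemode='a+')

try:
    subprocess.run(
        ['psql', '-U', 'postgres', '-d', 'sample',
         '-c', "copy data(f1,f2) from '/tmp/data.txt'"],
        stdout=subprocess.PIPE, stderr=subprocess.PIPE,
        check=True)  # raises CalledProcessError on a non-zero exit status
except subprocess.CalledProcessError as exc:
    # exc.stderr holds the bytes psql wrote to its error stream
    logging.error(exc.stderr.decode('utf-8'))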

Make a shell pipeline started from subprocess.Popen fail if the left-hand side of a pipe fails

I'm running a bash command with subprocess.Popen in Python:
cmd = "bwa-mem2/bwa-mem2 mem -R \'#RG\\tID:2064-01\\tSM:2064-01\\tLB:2064-01\\tPL:ILLUMINA\\tPU:2064-01\' reference_genome/human_g1k_v37.fasta BHYHT7CCXY.RJ-1967-987-02.2_1.fastq BHYHT7CCXY.RJ-1967-987-02.2_2.fastq -t 14 | samtools view -bS -o dna_seq/aligned/2064-01/2064-01.6.bam -"
process = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE, stderr=subprocess.STDOUT, text=True)
The problem is that I get returncode 0 even if the first command fails.
I have googled and found out about pipefail and it seems that this is what I should use.
However, I don't understand where to write it. I have tried:
"set -o pipefail && bwa-mem2/bwa-mem2 mem -R \'#RG\\tID:2064-01\\tSM:2064-01\\tLB:2064-01\\tPL:ILLUMINA\\tPU:2064-01\' reference_genome/human_g1k_v37.fasta BHYHT7CCXY.RJ-1967-987-02.2_1.fastq BHYHT7CCXY.RJ-1967-987-02.2_2.fastq -t 14 | samtools view -bS -o dna_seq/aligned/2064-01/2064-01.6.bam -"
which gives: /bin/sh: 1: set: Illegal option -o pipefail
Any ideas how I should incorporate this?
Edit:
I'm not sure if it is correct to edit my question when responding to an answer; there were not enough characters to respond in a comment.
Anyway, I tried your second approach without shell=True, @Charles Duffy.
(cmd_1 and cmd_2 are equal to what you wrote in your solution.)
This is the code I use:
try:
    p1 = Popen(shlex.split(cmd_1), stdout=PIPE)
    p2 = Popen(shlex.split(cmd_2), stdin=p1.stdout, stdout=PIPE, stderr=STDOUT, text=True)
    p1.stdout.close()
    output, error = p2.communicate()
    p1.wait()
    rc_1 = p1.poll()
    rc_2 = p2.poll()
    print("rc_1:", rc_1)
    print("rc_2:", rc_2)
    if rc_1 == 0 and rc_2 == 0:
        self.log_to_file("DEBUG", "# Process ended with returncode = 0")
        if text: self.log_to_file("INFO", f"{text} successfully")
    else:
        print("Raise exception")
        raise Exception(f"stdout: {output} stderr: {error}")
except Exception as e:
    print(f"Error: {e} in misc.run_command()")
    self.log_to_file("ERROR", f"# Process ended with returncode != 0, {e}")
This is the result I get when deliberately causing an error by renaming one file:
[E::main_mem] failed to open file `/home/jonas/BASE/dna_seq/reads/2064-01/test_BHYHT7CCXY.RJ-1967-987-02.2_2.fastq.gz'.
free(): double free detected in tcache 2
rc_1: -6
rc_2: 0
Raise exception
Error: stdout: stderr: None in misc.run_command()
ERROR: # Process ended with returncode != 0, stdout: stderr: None
It seems to capture the faulty returncode.
But why is stdout empty and stderr None?
How can I capture the output to have it logged to a logger both when the process is successful and when it fails?
First, With A Shell
Instead of letting shell=True specify sh by default, specify bash explicitly to ensure that pipefail is an available feature:
shell_script = r'''
set -o pipefail || exit
bwa-mem2/bwa-mem2 mem \
    -R '@RG\tID:2064-01\tSM:2064-01\tLB:2064-01\tPL:ILLUMINA\tPU:2064-01' \
    reference_genome/human_g1k_v37.fasta \
    BHYHT7CCXY.RJ-1967-987-02.2_1.fastq \
    BHYHT7CCXY.RJ-1967-987-02.2_2.fastq \
    -t 14 \
  | samtools view -bS \
      -o dna_seq/aligned/2064-01/2064-01.6.bam -
'''

process = subprocess.Popen(["bash", "-c", shell_script],
                           stdout=subprocess.PIPE,
                           stderr=subprocess.PIPE,
                           text=True)
This works, but it's not the best available option.
Second, With No Shell At All
p1 = subprocess.Popen(
    ['bwa-mem2/bwa-mem2', 'mem',
     '-R', r'@RG\tID:2064-01\tSM:2064-01\tLB:2064-01\tPL:ILLUMINA\tPU:2064-01',
     'reference_genome/human_g1k_v37.fasta',
     'BHYHT7CCXY.RJ-1967-987-02.2_1.fastq',
     'BHYHT7CCXY.RJ-1967-987-02.2_2.fastq', '-t', '14'],
    stdout=subprocess.PIPE)
p2 = subprocess.Popen(
    ['samtools', 'view', '-bS',
     '-o', 'dna_seq/aligned/2064-01/2064-01.6.bam', '-'],
    stdin=p1.stdout,
    stdout=subprocess.PIPE,
    stderr=subprocess.PIPE,
    text=True)
p1.stdout.close()
output, _ = p2.communicate()  # let p2 finish running
p1.wait()                     # ensure p1 has properly exited

print(f'bwa-mem2 exited with status {p1.returncode}')
print(f'samtools exited with status {p2.returncode}')
...which lets you check p1.returncode and p2.returncode separately.
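If you also want the left-hand process's stderr (the "failed to open file" message from the edit above) available for logging, one option is to point p1's stderr at a temporary file and read it back after both processes exit; piping it would risk a deadlock if bwa-mem2 wrote a lot of diagnostics while samtools was still consuming its stdout. A sketch, where cmd_1_argv and cmd_2_argv are hypothetical stand-ins for the two argv lists shown above:

import subprocess
import tempfile

# Hypothetical stand-ins for the two argv lists from the answer.
cmd_1_argv = ['bwa-mem2/bwa-mem2', 'mem', 'reference_genome/human_g1k_v37.fasta']
cmd_2_argv = ['samtools', 'view', '-bS', '-']

with tempfile.TemporaryFile() as p1_err:
    p1 = subprocess.Popen(cmd_1_argv, stdout=subprocess.PIPE, stderr=p1_err)
    p2 = subprocess.Popen(cmd_2_argv, stdin=p1.stdout,
                          stdout=subprocess.PIPE, stderr=subprocess.PIPE,
                          text=True)
    p1.stdout.close()
    output, error = p2.communicate()   # p2's stdout/stderr as strings
    p1.wait()
    p1_err.seek(0)
    p1_error = p1_err.read().decode()  # whatever bwa-mem2 wrote to stderr

print('bwa-mem2 rc:', p1.returncode, 'stderr:', p1_error)
print('samtools rc:', p2.returncode, 'stderr:', error)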

Ansible Zypper module error when I try to update packages

Good day all!
I have written a very simple Ansible role to update all packages on SUSE Leap 15.2:
- name: All packages updated
  package:
    name: "*"
    state: latest
but it seems that the Zypper module has a problem with it:
TASK [system_update : All packages updated] ***************************************************************************************************************************************************************************************************
task path: /home/merlin/ansible-kt-linux/roles/system_update/tasks/main.yml:10
<localhost> ESTABLISH LOCAL CONNECTION FOR USER: merlin
<localhost> EXEC /bin/sh -c 'echo ~merlin && sleep 0'
<localhost> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/merlin/.ansible/tmp/ansible-tmp-1617094154.778992-48329012899811 `" && echo ansible-tmp-1617094154.778992-48329012899811="` echo /home/merlin/.ansible/tmp/ansible-tmp-1617094154.778992-48329012899811 `" ) && sleep 0'
Using module file /usr/lib/python3.6/site-packages/ansible/modules/packaging/os/zypper.py
<localhost> PUT /home/merlin/.ansible/tmp/ansible-local-5239dx5tukgw/tmpvf5upp37 TO /home/merlin/.ansible/tmp/ansible-tmp-1617094154.778992-48329012899811/AnsiballZ_zypper.py
<localhost> EXEC /bin/sh -c 'chmod u+x /home/merlin/.ansible/tmp/ansible-tmp-1617094154.778992-48329012899811/ /home/merlin/.ansible/tmp/ansible-tmp-1617094154.778992-48329012899811/AnsiballZ_zypper.py && sleep 0'
<localhost> EXEC /bin/sh -c 'sudo -H -S -n -u root /bin/sh -c '"'"'echo BECOME-SUCCESS-qfmrjmpwqhyapufsdqunaohtmlxjucdk ; /usr/bin/python /home/merlin/.ansible/tmp/ansible-tmp-1617094154.778992-48329012899811/AnsiballZ_zypper.py'"'"' && sleep 0'
<localhost> EXEC /bin/sh -c 'rm -f -r /home/merlin/.ansible/tmp/ansible-tmp-1617094154.778992-48329012899811/ > /dev/null 2>&1 && sleep 0'
The full traceback is:
Traceback (most recent call last):
File "/home/merlin/.ansible/tmp/ansible-tmp-1617094154.778992-48329012899811/AnsiballZ_zypper.py", line 102, in <module>
_ansiballz_main()
File "/home/merlin/.ansible/tmp/ansible-tmp-1617094154.778992-48329012899811/AnsiballZ_zypper.py", line 94, in _ansiballz_main
invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)
File "/home/merlin/.ansible/tmp/ansible-tmp-1617094154.778992-48329012899811/AnsiballZ_zypper.py", line 40, in invoke_module
runpy.run_module(mod_name='ansible.modules.packaging.os.zypper', init_globals=None, run_name='__main__', alter_sys=True)
File "/usr/lib64/python2.7/runpy.py", line 188, in run_module
fname, loader, pkg_name)
File "/usr/lib64/python2.7/runpy.py", line 82, in _run_module_code
mod_name, mod_fname, mod_loader, pkg_name)
File "/usr/lib64/python2.7/runpy.py", line 72, in _run_code
exec code in run_globals
File "/tmp/ansible_zypper_payload_jYlnfB/ansible_zypper_payload.zip/ansible/modules/packaging/os/zypper.py", line 195, in <module>
ImportError: No module named xml
fatal: [localhost]: FAILED! => {
"changed": false,
"module_stderr": "Traceback (most recent call last):\n File \"/home/merlin/.ansible/tmp/ansible-tmp-1617094154.778992-48329012899811/AnsiballZ_zypper.py\", line 102, in <module>\n _ansiballz_main()\n File \"/home/merlin/.ansible/tmp/ansible-tmp-1617094154.778992-48329012899811/AnsiballZ_zypper.py\", line 94, in _ansiballz_main\n invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\n File \"/home/merlin/.ansible/tmp/ansible-tmp-1617094154.778992-48329012899811/AnsiballZ_zypper.py\", line 40, in invoke_module\n runpy.run_module(mod_name='ansible.modules.packaging.os.zypper', init_globals=None, run_name='__main__', alter_sys=True)\n File \"/usr/lib64/python2.7/runpy.py\", line 188, in run_module\n fname, loader, pkg_name)\n File \"/usr/lib64/python2.7/runpy.py\", line 82, in _run_module_code\n mod_name, mod_fname, mod_loader, pkg_name)\n File \"/usr/lib64/python2.7/runpy.py\", line 72, in _run_code\n exec code in run_globals\n File \"/tmp/ansible_zypper_payload_jYlnfB/ansible_zypper_payload.zip/ansible/modules/packaging/os/zypper.py\", line 195, in <module>\nImportError: No module named xml\n",
"module_stdout": "",
"msg": "MODULE FAILURE\nSee stdout/stderr for the exact error",
"rc": 1
}
PLAY RECAP ************************************************************************************************************************************************************************************************************************************
localhost : ok=2 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
Unfortunately I can't tell from this what exactly the problem is. Do any of you know what it is?
I solved it with the shell module:
- name: "Install python-xml on Suse"
  shell: zypper -n install python-xml
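The traceback also shows the module ran under /usr/bin/python (Python 2.7), which is where the xml module is missing. An alternative that avoids installing anything, assuming Python 3 is present at /usr/bin/python3 on Leap 15.2, would be to point Ansible at that interpreter instead, for example:

- name: All packages updated
  package:
    name: "*"
    state: latest
  vars:
    ansible_python_interpreter: /usr/bin/python3  # assumed path on Leap 15.2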

How to quote part of a subprocess.run list? [duplicate]

This question already has answers here:
Python Subprocess: Unable to Escape Quotes
(2 answers)
Closed last year.
I need to quote the part of the rsync line that subprocess.run uses that contains the ssh parameters; unfortunately, nothing I have tried has worked so far.
Can someone please advise me on the correct way to quote the ssh parameters, so that it will run under rsync.
At first I had a list of lists that got passed to subprocess.run, which fails with:
Traceback (most recent call last):
File "./tmp.py", line 20, in <module>
process = subprocess.run(rsync_cmd, stderr=subprocess.PIPE)
File "/usr/lib/python3.6/subprocess.py", line 423, in run
with Popen(*popenargs, **kwargs) as process:
File "/usr/lib/python3.6/subprocess.py", line 729, in __init__
restore_signals, start_new_session)
File "/usr/lib/python3.6/subprocess.py", line 1295, in _execute_child
restore_signals, start_new_session, preexec_fn)
TypeError: expected str, bytes or os.PathLike object, not list
Flattening it to an ordinary list fails with:
Unexpected remote arg: example.com:/var/log/maillog
rsync error: syntax or usage error (code 1) at main.c(1361) [sender=3.1.2]
Which makes sense, as part of the command line for rsync needs to be quoted.
So I try to quote it:
rsync: Failed to exec /usr/bin/ssh -F /home/rspencer/.ssh/config -o PreferredAuthentications=publickey -o StrictHostKeyChecking=accept-new -o TCPKeepAlive=yes -o ServerAliveInterval=5 -o ServerAliveCountMax=24 -o ConnectTimeout=30 -o ExitOnForwardFailure=yes -o ControlMaster=autoask -o ControlPath=/run/user/1000/foo-ssh-master-%C -l root -p 234 -o Compression=yes: No such file or directory (2)
rsync error: error in IPC code (code 14) at pipe.c(85) [Receiver=3.1.2]
rsync: connection unexpectedly closed (0 bytes received so far) [Receiver]
rsync error: error in IPC code (code 14) at io.c(235) [Receiver=3.1.2]
Which is due, I expect, to it being a string instead of a list, although I'm guessing and that does not make complete sense to me.
Summarized code of my last attempt:
#!/usr/bin/python3
import subprocess

ssh_args = [
    "-F",
    "/home/rspencer/.ssh/config",
    "-o",
    "PreferredAuthentications=publickey",
    "-o",
    "StrictHostKeyChecking=accept-new",
    "-o",
    "TCPKeepAlive=yes",
    "-o",
    "ServerAliveInterval=5",
    "-o",
    "ServerAliveCountMax=24",
    "-o",
    "ConnectTimeout=30",
    "-o",
    "ExitOnForwardFailure=yes",
    "-o",
    "ControlMaster=autoask",
    "-o",
    "ControlPath=/run/user/1000/foo-ssh-master-%C",
    "-l",
    "root",
    "-p",
    "234",
]
rsync_params = []
src = "example.com:/var/log/maillog"
dest = "."

# Build SSH command
ssh_cmd = ["/usr/bin/ssh"] + ssh_args
# Use basic compression
ssh_cmd.extend(["-o", "Compression=yes"])
ssh_cmd = " ".join(ssh_cmd)
ssh_cmd = f'"{ssh_cmd}"'

# Build rsync command
rsync_cmd = ["/usr/bin/rsync", "-vP", "-e", ssh_cmd] + rsync_params + [src, dest]

# Run rsync
process = subprocess.run(rsync_cmd, stderr=subprocess.PIPE)
if process.returncode != 0:
    print(process.stderr.decode("UTF-8").strip())
What the correct command would look like on the command line:
/usr/bin/rsync -vP -e "/usr/bin/ssh -F /home/rspencer/.ssh/config -o \
PreferredAuthentications=publickey -o StrictHostKeyChecking=accept-new -o \
TCPKeepAlive=yes -o ServerAliveInterval=5 -o ServerAliveCountMax=24 -o \
ConnectTimeout=30 -o ExitOnForwardFailure=yes -o ControlMaster=autoask \
-o ControlPath=/run/user/1000/foo-ssh-master-%C -l root -p 234 -o \
Compression=yes" example.com:/var/log/maillog .
Turns out the trick is to not try to quote it.
I removed the following line and it worked without further modification:
ssh_cmd = f'"{ssh_cmd}"'
I've read so much documentation and missed it until asking the question. Murphy.
Rereading the post "How not to quote argument in subprocess?" and finally understanding what Greg Hewgill was saying helped me. I blame lack of sleep.
"If you use quotes on the shell command line, then put the whole contents in one element of args (without the quotes). ..." - Greg Hewgill

MLflow Error while deploying the Model to local REST server

System Details:
Operating System: Ubuntu 19.04
Anaconda version: 2019.03
Python version: 3.7.3
mlflow version: 1.0.0
Steps to Reproduce: https://mlflow.org/docs/latest/tutorial.html
Error at line/command: mlflow models serve -m [path_to_model] -p 1234
Error:
Command 'source activate mlflow-c4536834c2e6e0e2472b58bfb28dce35b4bd0be6 1>&2 && gunicorn --timeout 60 -b 127.0.0.1:1234 -w 4 mlflow.pyfunc.scoring_server.wsgi:app' returned non zero return code. Return code = 1
Terminal Log:
(mlflow) root@user:/home/user/mlflow/mlflow/examples/sklearn_elasticnet_wine/mlruns/0/e3dd02d5d84545ffab858db13ede7366/artifacts/model# mlflow models serve -m $(pwd) -p 1234
2019/06/18 16:15:16 INFO mlflow.models.cli: Selected backend for flavor 'python_function'
2019/06/18 16:15:17 INFO mlflow.pyfunc.backend: === Running command 'source activate mlflow-c4536834c2e6e0e2472b58bfb28dce35b4bd0be6 1>&2 && gunicorn --timeout 60 -b 127.0.0.1:1234 -w 4 mlflow.pyfunc.scoring_server.wsgi:app'
bash: activate: No such file or directory
Traceback (most recent call last):
File "/root/anaconda3/envs/mlflow/bin/mlflow", line 10, in <module>
sys.exit(cli())
File "/root/anaconda3/envs/mlflow/lib/python3.7/site-packages/click/core.py", line 764, in __call__
return self.main(*args, **kwargs)
File "/root/anaconda3/envs/mlflow/lib/python3.7/site-packages/click/core.py", line 717, in main
rv = self.invoke(ctx)
File "/root/anaconda3/envs/mlflow/lib/python3.7/site-packages/click/core.py", line 1137, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "/root/anaconda3/envs/mlflow/lib/python3.7/site-packages/click/core.py", line 1137, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "/root/anaconda3/envs/mlflow/lib/python3.7/site-packages/click/core.py", line 956, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/root/anaconda3/envs/mlflow/lib/python3.7/site-packages/click/core.py", line 555, in invoke
return callback(*args, **kwargs)
File "/root/anaconda3/envs/mlflow/lib/python3.7/site-packages/mlflow/models/cli.py", line 43, in serve
host=host)
File "/root/anaconda3/envs/mlflow/lib/python3.7/site-packages/mlflow/pyfunc/backend.py", line 76, in serve
command_env=command_env)
File "/root/anaconda3/envs/mlflow/lib/python3.7/site-packages/mlflow/pyfunc/backend.py", line 147, in _execute_in_conda_env
command, rc
Exception: Command 'source activate mlflow-c4536834c2e6e0e2472b58bfb28dce35b4bd0be6 1>&2 && gunicorn --timeout 60 -b 127.0.0.1:1234 -w 4 mlflow.pyfunc.scoring_server.wsgi:app' returned non zero return code. Return code = 1
(mlflow) root@user:/home/user/mlflow/mlflow/examples/sklearn_elasticnet_wine/mlruns/0/e3dd02d5d84545ffab858db13ede7366/artifacts/model#
Following the steps mentioned in GitHub issue 1507 (https://github.com/mlflow/mlflow/issues/1507), I was able to resolve this issue.
As described there, the "anaconda3/bin/" directory is never added to the PATH environment variable. To resolve this issue, take the "else" part of the conda initialize code block in your ~/.bashrc file and apply it to your PATH variable:
# >>> conda initialize >>>
# !! Contents within this block are managed by 'conda init' !!
__conda_setup="$('/home/atulk/anaconda3/bin/conda' 'shell.bash' 'hook' 2> /dev/null)"
if [ $? -eq 0 ]; then
    eval "$__conda_setup"
else
    if [ -f "/home/atulk/anaconda3/etc/profile.d/conda.sh" ]; then
        . "/home/atulk/anaconda3/etc/profile.d/conda.sh"
    else
        export PATH="/home/atulk/anaconda3/bin:$PATH"
    fi
fi
unset __conda_setup
# <<< conda initialize <<<
In this case, I added export PATH="/home/atulk/anaconda3/bin:$PATH" to the PATH variable. However, this is just a temporary fix until the issue is fixed in the project.
When you are not using Anaconda, the equivalent is:
export PATH=$PATH:/path/to/python/Python/2.7/bin
