I'm trying to execute StudentT() but receive an error:
"ImportError: ('DLL load failed: The specified procedure could not be found.', '[Elemwise{log1p,no_inplace}()]')"
If I use Normal(), there is no issue.
import pymc3 as pm
from pymc3 import glm

with pm.Model() as model:
    pm.glm.glm('Returns ~ AAP+CTXS+CAH+LLL', data,
               family=glm.families.StudentT())
    start = pm.find_MAP()
    step = pm.NUTS(scaling=start)
    trace = pm.sample(2000, step, start=start)
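For what it's worth, the op named in the traceback is Theano's Elemwise log1p. A minimal check, assuming Theano imports at all on this machine, that tries to compile that op outside PyMC3 (my own diagnostic sketch, not from the original post):
import numpy as np
import theano
import theano.tensor as tt

# Force compilation of the log1p Elemwise op on its own; if the DLL problem is
# in Theano's compiled op, this should fail with the same ImportError.
x = tt.dvector('x')
f = theano.function([x], tt.log1p(x))
print(f(np.array([0.0, 0.5, 1.0])))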
I'm deploying a simple Python function in Google Cloud but cannot get it to save. It shows this error:
"Function failed on loading user code. This is likely due to a bug in the user code. Error message: Error: please examine your function logs to see the error cause: https://cloud.google.com/functions/docs/monitoring/logging#viewing_logs. Additional troubleshooting documentation can be found at https://cloud.google.com/functions/docs/troubleshooting#logging. Please visit https://cloud.google.com/functions/docs/troubleshooting for in-depth troubleshooting documentation."
The logs don't seem to show much that would indicate an error in the code. I followed this guide: https://blog.thereportapi.com/automate-a-daily-etl-of-currency-rates-into-bigquery/
The only differences are the environment variables and the endpoint I'm using.
Code is below, which is just a get request followed by a push of data into a table.
import requests
import json
import time
import os
from google.cloud import bigquery

# Set default values for these variables if they are not found in environment variables
PROJECT_ID = os.environ.get("PROJECT_ID", "xxxxxxxxxxxxxx")
EXCHANGERATESAPI_KEY = os.environ.get("EXCHANGERATESAPI_KEY", "xxxxxxxxxxxxxxx")
REGIONAL_ENDPOINT = os.environ.get("REGIONAL_ENDPOINT", "europe-west1")
DATASET_ID = os.environ.get("DATASET_ID", "currency_rates")
TABLE_NAME = os.environ.get("TABLE_NAME", "currency_rates")
BASE_CURRENCY = os.environ.get("BASE_CURRENCY", "SEK")
SYMBOLS = os.environ.get("SYMBOLS", "NOK,EUR,USD,GBP")

def hello_world(request):
    latest_response = get_latest_currency_rates()
    write_to_bq(latest_response)
    return "Success"

def get_latest_currency_rates():
    PARAMS = {'access_key': EXCHANGERATESAPI_KEY, 'symbols': SYMBOLS, 'base': BASE_CURRENCY}
    response = requests.get("https://api.exchangeratesapi.io/v1/latest", params=PARAMS)
    print(response.json())
    return response.json()

def write_to_bq(response):
    # Instantiate a client
    bigquery_client = bigquery.Client(project=PROJECT_ID)

    # Prepare a reference to the dataset and table
    dataset_ref = bigquery_client.dataset(DATASET_ID)
    table_ref = dataset_ref.table(TABLE_NAME)
    table = bigquery_client.get_table(table_ref)

    # Get the current timestamp so we know how fresh the data is
    timestamp = time.time()

    # Ensure the response is a string, not JSON
    jsondump = json.dumps(response)

    rows_to_insert = [{"timestamp": timestamp, "data": jsondump}]
    errors = bigquery_client.insert_rows(table, rows_to_insert)  # API request
    print(errors)
    assert errors == []
I tried just the part that does the GET request in an offline editor and I can confirm a response comes back fine. I suspect it has something to do with permissions or the way the script tries to access the database.
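To narrow that down, a standalone sketch I would run locally, outside Cloud Functions, exercising only the BigQuery insert path (same placeholder project/dataset/table names as above, relying on application-default credentials):
import json
import time
from google.cloud import bigquery

# Test only the insert path; if this fails locally with the same service
# account, the problem is permissions or table schema rather than the function.
client = bigquery.Client(project="xxxxxxxxxxxxxx")
table_ref = client.dataset("currency_rates").table("currency_rates")
table = client.get_table(table_ref)
rows = [{"timestamp": time.time(), "data": json.dumps({"test": True})}]
print(client.insert_rows(table, rows))  # an empty list means the insert succeeded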
So I'm following the NCBI instructions available here: https://www.ncbi.nlm.nih.gov/books/NBK52640/
and I can't for the life of me understand what's wrong here.
My PATH and BLASTDB settings, the error message, and my blastdb directory listing are not reproduced here.
And here's my Python code:
from Bio.Blast.Applications import NcbipsiblastCommandline
import subprocess

psi_cline = NcbipsiblastCommandline('psiblast', db='refseq_protein.00',
                                    query="results.fasta", evalue=10,
                                    out="out_psi.xml", outfmt=7,
                                    out_pssm="pssm-results_pssm")
print(psi_cline)
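Note that print(psi_cline) only shows the command string. To actually execute it from Python, Biopython's command-line wrappers are callable and return stdout/stderr, which is where a database problem usually surfaces; a minimal sketch:
# Run the psiblast command built above and capture its output;
# any database-related error message will appear in stderr.
stdout, stderr = psi_cline()
print(stdout)
print(stderr)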
I had this very problem today and it turns out that my blast-database was corrupted in some manner. I recreated my database with makeblastdb and this error went away.
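For reference, a hedged sketch of what rebuilding the database can look like when driven from Python; the input FASTA filename is just a placeholder, not something from the original post:
import subprocess

# Rebuild a protein BLAST database from a FASTA file using makeblastdb.
subprocess.run(
    ["makeblastdb",
     "-in", "refseq_protein.faa",   # placeholder FASTA file
     "-dbtype", "prot",
     "-out", "refseq_protein",
     "-parse_seqids"],
    check=True,
)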
I am trying to run the coxph() survival function with spark_apply(), but I am getting the error below:
Error in file(con, "r") : cannot open the connection
In addition: Warning message:
In file(con, "r") :
cannot open file 'C:\Users\XXXX\AppData\Local\Temp\1\Rtmpusw8Aw\file344868ef1e07_spark.log': Permission denied
I have seen this error before whenever I gave a wrong R command, but in this case my coxph command works fine in R.
Reference for spark_apply with a linear regression:
https://spark.rstudio.com/
I know spark_apply just tries to distribute the data as much as it can. What I want to know is whether I am making a mistake, whether spark_apply cannot run survival models, or whether I need to import the survival package into sparklyr in some way.
R Code:
coxph(Surv(t_start, t_stop, t_trans) ~ 1, data_input)
Output:
Null model
log likelihood= -XXX
n= XX
sparklyr code:
data <- copy_to(sc, data_input)
spark_apply(
  data,
  function(e) coxph(Surv(t_start, t_stop, t_trans) ~ 1, e)
)
Output:
Error in file(con, "r") : cannot open the connection
In addition: Warning message:
In file(con, "r") :
cannot open file 'C:\Users\XXXX\AppData\Local\Temp\1\Rtmpusw8Aw\file344868ef1e07_spark.log': Permission denied
Set up the runtime: python3 and GPU.
Run the code step by step.
I only successfully ran the code the first time.
After that, when running the part below, I got "RuntimeError: CUDA error: invalid device function":
import numpy as np
import torch

# tacotron2, waveglow and text are defined earlier in the notebook
sequence = np.array(tacotron2.text_to_sequence(text, ['english_cleaners']))[None, :]
sequence = torch.from_numpy(sequence).to(device='cuda', dtype=torch.int64)
with torch.no_grad():
    _, mel, _, _ = tacotron2.infer(sequence)
    audio = waveglow.infer(mel)
audio_numpy = audio[0].data.cpu().numpy()
rate = 22050
Do you know the root cause? And can the pre-trained model be run on a local CPU?
At the time of writing, you can solve this issue by adding
!pip install torch==1.1.0 torchvision==0.3.0
before import torch
in https://colab.research.google.com/github/pytorch/pytorch.github.io/blob/master/assets/hub/nvidia_deeplearningexamples_tacotron2.ipynb
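On the second part of the question (running on a local CPU): in principle the hub models can be moved to CPU before inference. A hedged sketch, assuming tacotron2, waveglow and text are already loaded as in the notebook and that all of their ops have CPU kernels (inference will be much slower than on GPU):
import numpy as np
import torch

# Move both models to CPU and run the same inference loop as in the question.
device = torch.device('cpu')
tacotron2 = tacotron2.to(device).eval()
waveglow = waveglow.to(device).eval()

sequence = np.array(tacotron2.text_to_sequence(text, ['english_cleaners']))[None, :]
sequence = torch.from_numpy(sequence).to(device=device, dtype=torch.int64)
with torch.no_grad():
    _, mel, _, _ = tacotron2.infer(sequence)
    audio = waveglow.infer(mel)
audio_numpy = audio[0].data.cpu().numpy()
rate = 22050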
I'm trying to read all the csv files in 2 directories using glob module:
import os
import pandas as pd
import glob

def get_list_of_group_df(filepath):
    all_group_df_list = []
    groups_path = filepath
    for file in glob.glob(groups_path):
        name = os.path.basename(file)
        name = name.partition('_raw')[0]
        with open(file, 'r') as name_vcf:
            group_vcf_to_df = pd.read_csv(name_vcf, delimiter='\t',
                                          header=0, index_col=False, low_memory=False,
                                          usecols=['A', 'B', 'C', 'D'])
        group_df_wo_duplicates = group_vcf_to_df.drop_duplicates()
        group_df = group_df_wo_duplicates.reset_index(drop=True)
        group_df['group_name'] = name
        all_group_df_list.append(group_df)
    return all_group_df_list

def get_freq():
    group_filepath_dict = {'1_group': "/home/Raw_group/*.tsv",
                           '2_group': "/home/Raw_group/*.tsv"}
    for group, filepath in group_filepath_dict.items():
        print(get_list_of_group_df(filepath))

get_freq()
When I run this script locally, it works just fine. However, running it on an Ubuntu server gives me the following error message:
Error in `python3': free(): invalid pointer: 0x00007fcc970d76be ***
Aborted (core dumped)
I'm using Python 3.6.3. Can anyone tell me how to solve the problem?
I have a similar problem in Python 3.7.3 under Raspbian Buster 2020-02-13. My program dies with free(): invalid pointer except no pointer is given and there is no core dump and no stack trace. So, I have nothing to debug with. This has happened a few times, usually after the program has been running for a day or two, so I suspect it's a very slow memory leak or a very infrequent intermittent bug in Python garbage collection. I am not doing any memory management myself.
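In both cases, a small diagnostic I would run first on the failing machine; this rests on an assumption of mine that free(): invalid pointer aborts like these often come from mismatched compiled extensions (numpy/pandas wheels) rather than from the script itself:
import sys
import numpy as np
import pandas as pd

# Print interpreter and extension versions so the failing environment
# can be compared with a working one.
print(sys.version)
print("numpy:", np.__version__)
print("pandas:", pd.__version__)
pd.show_versions()  # detailed build/dependency report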