I am trying to experiment with Spark and found this demo provided by DataStax:
http://docs.datastax.com/en/datastax_enterprise/4.5/datastax_enterprise/anaHome/anaPortfolioMgrSpark.html
I was able to run the pricer utility, and the data populated the created tables without any issues.
The next step is to start the website, which I did:
$ cd website
$ sudo ./start
And then open the url on the browser:
http://localhost:8983/portfolio
The website appears to load but a script error comes up:
fname is undefined
The initial error seems to stem from here:
if (mtype == Thrift.MessageType.EXCEPTION) {
    var x = new Thrift.TApplicationException();
    x.read(this.input);
    this.input.readMessageEnd();
    throw x;
}
The error occurs in thrift.js.
Any ideas on how to resolve this? I believe this is probably a bug (or limitation) that must be addressed by the DataStax team. I am using DSE on Ubuntu. My best guess is that it fails because I don't use localhost in my cassandra.yaml file, and the website gives me no way to specify the IP of my database server. There appears to be a duplicate of the cassandra.yaml file in the website resource directory (maybe it uses that?), but it has the same settings as my primary cassandra.yaml file.
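Since the failure surfaces in thrift.js, my working theory is that the site cannot open a Thrift connection to Cassandra. This is how I checked connectivity (9160 is Cassandra's default Thrift port; 10.0.0.5 stands in for my actual server IP):
$ # confirm Cassandra's Thrift port is bound to the rpc_address set in cassandra.yaml
$ netstat -tlnp | grep 9160
$ # confirm the website host can actually reach that address and port
$ nc -zv 10.0.0.5 9160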
I use Apache Airflow for daily ETL jobs. I installed it in Azure Kubernetes Service using the provided Helm chart. It ran fine for half a year, but recently I became unable to access the logs in the webserver (this always used to work).
I'm getting the following error:
*** Log file does not exist: /opt/airflow/logs/dag_id=analytics_etl/run_id=manual__2022-09-26T09:25:50.010763+00:00/task_id=copy_device_table/attempt=18.log
*** Fetching from: http://airflow-worker-0.airflow-worker.default.svc.cluster.local:8793/dag_id=analytics_etl/run_id=manual__2022-09-26T09:25:50.010763+00:00/task_id=copy_device_table/attempt=18.log
*** !!!! Please make sure that all your Airflow components (e.g. schedulers, webservers and workers) have the same 'secret_key' configured in 'webserver' section and time is synchronized on all your machines (for example with ntpd) !!!!!
****** See more at https://airflow.apache.org/docs/apache-airflow/stable/configurations-ref.html#secret-key
****** Failed to fetch log file from worker. Client error '403 FORBIDDEN' for url 'http://airflow-worker-0.airflow-worker.default.svc.cluster.local:8793/dag_id=analytics_etl/run_id=manual__2022-09-26T09:25:50.010763+00:00/task_id=copy_device_table/attempt=18.log'
For more information check: https://httpstatuses.com/403
What have I tried:
I've made sure that the log file exists (I can exec into the airflow-worker-0 pod and read the file on the command line at the location specified in the error).
I've rolled back my deployment to an earlier commit from when I know for sure it was still working, but it made no difference.
I was using webserverSecretKeySecretName in the values.yaml configuration. I changed the secret to which that name was pointing (deleted it and recreated it, as described here: https://airflow.apache.org/docs/helm-chart/stable/production-guide.html#webserver-secret-key — roughly the commands sketched after this list), but it didn't work (no difference, same error).
I changed the config to use webserverSecretKey instead (in plain text); no difference.
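The recreation itself followed the guide's commands, roughly this (my-webserver-secret is just the name used in my values.yaml):
$ kubectl delete secret my-webserver-secret
$ kubectl create secret generic my-webserver-secret \
    --from-literal="webserver-secret-key=$(python3 -c 'import secrets; print(secrets.token_hex(16))')"
with the chart pointed at it via:
webserverSecretKeySecretName: my-webserver-secret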
My thoughts/observations:
The error states that the log file doesn't exist, but that's not true. It probably just can't access it.
The time is the same in all pods (I double-checked by exec-ing into them and typing date on the command line).
The webserver secret is the same in the worker, the scheduler, and the webserver (I double-checked by exec-ing into them and inspecting the corresponding env variable; the exact checks are sketched below).
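For completeness, the checks looked like this (pod names come from my release, so look yours up first):
$ kubectl get pods   # find the worker, scheduler, and webserver pod names
$ kubectl exec airflow-worker-0 -- date
$ kubectl exec airflow-worker-0 -- env | grep AIRFLOW__WEBSERVER__SECRET_KEY
and the same two exec commands repeated against the scheduler and webserver pods.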
Any ideas?
Turns out this was a known bug in the latest Airflow release (2.4.0) when deployed with the official Helm chart, reported here:
https://github.com/apache/airflow/discussions/26490
It should be resolved in version 2.4.1, which should be available in the next couple of days.
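In the meantime, a possible workaround (assuming you run the chart's default image) is to pin the Airflow image back to a 2.3.x tag in your values.yaml, e.g.:
airflowVersion: "2.3.4"
defaultAirflowTag: "2.3.4"
and roll it out with a normal helm upgrade.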
As per the document (https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/composing_a_customized_rhel_system_image/managing-repositories_composing-a-customized-rhel-system-image), I tried to override the system repository with a custom base URL. But blueprint depsolve shows the error below:
# composer-cli blueprints depsolve Test1-blueprint
2022-06-09 08:06:58,841: Test1-blueprint: This system does not have any valid subscriptions. Subscribe it before specifying rhsm: true in sources.
And after the next service restart, osbuild-composer does not start:
ERROR: Info Error: Get "http://localhost/api/v1/projects/source/info/appstream": dial unix /run/weldr/api.socket: connect: connection refused
Am I missing something here?
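For reference, the override I placed in /etc/osbuild-composer/repositories/rhel-8.json looks roughly like this (base URL anonymized); going by the error message, it seems the "rhsm": true flag requires a subscribed system:
{
  "x86_64": [
    {
      "name": "appstream",
      "baseurl": "http://example.com/repo/appstream",
      "check_gpg": false,
      "rhsm": true
    }
  ]
}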
Having all manner of issues with this myself. A trawl of my /var/log/messages file suggests that, for me at least, osbuild-composer is failing to start due to the non-existence of /etc/osbuild-composer/osbuild-composer.toml. The actual error is "permission denied", but the file doesn't exist.
This is on RHEL 8.5, and I just updated to 8.6 this morning; same problem.
Edit: I've removed everything and reverted to using the lorax backend, as per chapter 2.2 in the doc linked above (the same one I was following). My 'composer-cli compose types' command now at least works. Fingers crossed.
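For anyone else stuck here, the systemd units' own logs told me more than the API error did (these are the standard units shipped with osbuild-composer):
# systemctl status osbuild-composer.socket osbuild-composer.service
# journalctl -u osbuild-composer.service --since today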
We are planning to migrate from HMC to Backoffice, and I want all the data in HMC to be transferred to Backoffice. When I run SOLR full indexing, I get the issue below.
ERROR [Thread-6784] (00011GS2) [BackofficeIndexerStrategy] Executing indexer worker as an admin user failed:
ERROR [Thread-645] (00011HJD) [BackofficeIndexerStrategy] Executing indexer worker as an admin user failed:
INFO | jvm 1 | main | 2021/01/13 09:54:59.310 | de.hybris.platform.solrfacetsearch.indexer.exceptions.IndexerException: de.hybris.platform.solrfacetsearch.solr.exceptions.SolrServiceException: Could not check for a remote solr core [master_backoffice_backoffice_product_flip] due to Server refused connection at: http://Dev:8983/solr
I get this error every time, and I have to restart my Solr server.
Can someone please help me with this?
Thanks
This might be an issue with SSL checks. You can try disabling SSL by adding
solrserver.instances.default.ssl.enabled=false
to your local.properties file. This should only be done in your local development environment.
Alternatively, you can try using https in your default Solr server URL, i.e. https://localhost:8983/solr
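For completeness, the Solr connection settings in local.properties look roughly like this (values are examples; the property names come from the standard solrserver extension, so verify them against your version):
solrserver.instances.default.autostart=true
solrserver.instances.default.mode=standalone
solrserver.instances.default.hostname=localhost
solrserver.instances.default.port=8983
solrserver.instances.default.ssl.enabled=false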
I am using SAP Hybris 2205. Products display on the storefront, but the Solr server is not working. I also tried this (Backoffice -> facet conf. -> siteindex -> enter -> finish) and still get the error.
I can confirm the 3-replica cluster of h2o inside K3s is correctly deployed, as executing h2o.init(ip="x.x.x.x") in the Python 3 interpreter works as expected. I followed the instructions noted here: https://www.h2o.ai/blog/running-h2o-cluster-on-a-kubernetes-cluster/
Nevertheless, I had to modify the service.yaml and comment out the line that says clusterIP: None, as K3s was complaining about its inability to set the clusterIP to None. Even so, I can confirm it is working correctly, and I am able to use an external IP to connect to the cluster.
If I try to load the dataset using the h2o cluster inside the K3s cluster, following the exact same steps described here: http://docs.h2o.ai/h2o/latest-stable/h2o-docs/automl.html, this is the output I get:
>>> train = h2o.import_file("https://s3.amazonaws.com/erin-data/higgs/higgs_train_10k.csv")
...
h2o.exceptions.H2OResponseError: Server error java.lang.IllegalArgumentException:
Error: Key not loaded: Key<Frame> https://s3.amazonaws.com/erin-data/higgs/higgs_train_10k.csv
Request: POST /3/ParseSetup
data: {'check_header': '0', 'source_frames': '["https://s3.amazonaws.com/erin-data/higgs/higgs_train_10k.csv"]'}
The same error occurs if I use the h2o.upload_file("x.csv") method.
There is a clue about what may be happening here: Key not loaded: Key<Frame> while POSTing source frame through ParseSetup in H2O API call, but I am not using curl, and I cannot find any parameter that could help me overcome this issue: http://docs.h2o.ai/h2o/latest-stable/h2o-py/docs/h2o.html?highlight=import_file#h2o.import_file
I need to use the Python client inside the same K3s cluster for various technical reasons, so I am not able to launch either Flow or Firebug to find out what may be happening.
I can confirm everything works correctly when I simply issue an h2o.init(), using the local Java instance.
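For what it's worth, this is the minimal test I keep running (x.x.x.x stands for the cluster's address). As I understand the API, import_file makes the server nodes fetch the path themselves, while upload_file streams a client-local file to the server:
import h2o

h2o.init(ip="x.x.x.x")  # connect to the K3s-hosted cluster

# server-side fetch: every H2O node must be able to resolve and reach the URL
train = h2o.import_file("https://s3.amazonaws.com/erin-data/higgs/higgs_train_10k.csv")

# client-side push: only the client machine needs access to the file
local = h2o.upload_file("higgs_train_10k.csv")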
UPDATE 1:
I have tried in different K3s clusters without success. I changed the service.yaml to a NodePort, and now this is the error traceback:
>>> train = h2o.import_file("https://s3.amazonaws.com/erin-data/higgs/higgs_train_10k.csv")
...
h2o.exceptions.H2OResponseError: Server error java.lang.IllegalArgumentException:
Error: Job is missing
Request: GET /3/Jobs/$03010a2a016132d4ffffffff$_a2366be93ec99a78d7bc161de8c54d67
UPDATE 2:
I have tried using different services (NodePort, LoadBalancer, ClusterIP) and none of them work. I have also tried using Minikube with the official image, and with a custom image made by me, without success. I suspect this is something related to either h2o itself or the clustering between pods. I will keep digging and hopefully strike gold.
UPDATE 3:
I also found out that the post about running H2O in Docker (https://www.h2o.ai/blog/h2o-docker/) is really outdated, and the Dockerfile present on GitHub does not work either (I changed it to uncomment the ENTRYPOINT section, without success): https://github.com/h2oai/h2o-3/blob/master/Dockerfile
Even so, I tried the custom image I built for h2o-k8s, and it works seamlessly in pure Docker. I am wondering why it still does not work in K8s...
UPDATE 4:
I have tried modifying the environment variable called H2O_KUBERNETES_SERVICE_DNS, without success; the wiring is sketched below.
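For reference, this is how the variable is wired into the pod spec (the service name is from my deployment, so treat it as an example):
env:
  - name: H2O_KUBERNETES_SERVICE_DNS
    value: h2o-service.default.svc.cluster.local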
In the meantime, the cluster started to become unavailable; that is, the readiness probes would not complete successfully. No matter what I change now, it does not work.
I spun up a K3d cluster locally to see what happened, and surprisingly, the readiness probes were not failing, using v3.30.0.6. I then started testing with R instead of Python. I am glad I tried, because it may have pinpointed what was wrong: there is a version mismatch between the client and the server. So I updated the image accordingly to v3.30.0.1.
But now the readiness probe is again not working in my k3d cluster, so I am unable to test it.
It seems it is working now. R client version 3.30.0.1 with server version 3.30.0.1; I also tried Python client version 3.30.0.7 with server version 3.30.0.7, and it started working. Marvelous. The problem was caused by a version mismatch between the client and the server, as the Python client was updated to 3.30.0.7 while the latest server image for Docker was 3.30.0.6.
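For anyone who hits this, the mismatch is quick to confirm from the Python client before changing anything else (x.x.x.x stands for your cluster address):
import h2o

print(h2o.__version__)        # client version
h2o.init(ip="x.x.x.x")        # the connection banner also reports the cluster version
print(h2o.cluster().version)  # server version

If they differ, pin the client to the server's version, e.g. pip install h2o==3.30.0.6.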
I'm getting the following error when attempting to run DotNetNuke 7.1 from IIS.
Object reference not set to an instance of an object.
Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code.
Exception Details: System.NullReferenceException: Object reference not set to an instance of an object.
Source Error:
Line 572: //first call GetProviderPath - this insures that the Database is Initialised correctly
Line 573: //and also generates the appropriate error message if it cannot be initialised correctly
Line 574: string strMessage = DataProvider.Instance().GetProviderPath();
Line 575: //get current database version from DB
Line 576: if (!strMessage.StartsWith("ERROR:"))
I've tried running it from Visual Studio 2012 after downloading and extracting the source code to a folder, but I get the same error (also, VS loads about 13 instances of its built-in webserver, which can't be correct).
Clearly, there is something wrong with the database. From what I've read in the past, there should have been a start-up configuration page (for configuring settings the first time you run the project).
I did look at the local version of IIS (running on Windows 8), and it created the site fine there; however, for some reason the internal webserver attempts to run (and the option to run on external IIS is greyed out).
Anyone run into this problem with DNN Community edition? I've tried running as admin and setting permissions with no luck at all.
Any way to fix this?
Ok, the key is to delete the Database.mdf file completely. Then:
1. Create a new empty database of your choice in SQL Server (2008 or greater).
2. Create a new user account with db_owner access (it must be able to create tables, etc.).
3. Change the connection strings in release.config and development.config to connect to the new database (see the example below).
4. DELETE the web.config file.
5. RENAME either config file to "web.config".
6. Set the default project to the web project in VS.
7. Set the default page to default.aspx.
8. Run.
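For the connection string step, the entries look roughly like this (server, database, and credentials are placeholders; SiteSqlServer is DNN's standard connection string name):
<connectionStrings>
  <add name="SiteSqlServer"
       connectionString="Data Source=.\SQLEXPRESS;Initial Catalog=DNN_Dev;User ID=dnnuser;Password=your_password"
       providerName="System.Data.SqlClient" />
</connectionStrings>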
I made the erroneous assumption that running the app would rename the config file for me (not sure why I assumed that).
SOLVED!