How can I see the logs of a Spark Job Server task? - apache-spark

I deployed Spark Job Server according to https://github.com/spark-jobserver/spark-jobserver. Then I created a job server project and uploaded it to the job server. While the project is running, how can I see the logs?

It doesn't look like it's possible to see the logs while a job is running. I browsed through the source code and couldn't find any reference to such a feature, and it's clearly not exposed in the UI either. Your only option seems to be viewing the logs after a job has run; by default they are stored in /var/log/job-server, which you probably already know.
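As a minimal sketch (assuming the default log location and shell access to the job server host; the exact file name depends on your logging configuration), you can inspect the logs from the machine running the server:

ls /var/log/job-server
tail -f /var/log/job-server/spark-job-server.log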

Related

Post-build actions in Python/Linux webapp do not run

I have a Django-based web app deployed from Github, running in Python 3.9. The app deploys and starts successfully.
I need to add post-build actions to complete the deployment: the exceedingly common Django task of running "manage.py". Following the general and Python-specific docs, I have added an app setting of
POST_BUILD_SCRIPT_PATH=postbuild.sh
There is a shell script, postbuild.sh, in the root of my app, which runs fine if I SSH into the running container. The expected behaviour is that after deployment this script should run and write its output to the deployment log. Neither of these things happens.
I have tested the app setting POST_BUILD_COMMAND with a very simple echo, and that does nothing either.
Can you tell me either what I need to do to make these app settings work, or suggest an alternative method of running the post-build script?
Please note that this is a Linux app using Oryx, so answers concerning Windows/Kudu are not relevant here.
I noticed you asked your question over here as well. Your setting needs to be set to a relative path, /postbuild.sh.
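For reference, a minimal sketch of wiring this up (the resource group and app names are placeholders, and the manage.py command is only an illustrative example). The app setting can be set with the Azure CLI:

az webapp config appsettings set --resource-group <my-rg> --name <my-app> --settings POST_BUILD_SCRIPT_PATH=postbuild.sh

with a postbuild.sh at the repo root along these lines:

#!/bin/bash
# Placeholder post-build step; substitute your own manage.py task.
python manage.py collectstatic --noinput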

Save Logs on Heroku with Node

I'm trying to find a way to store logs so that they can be seen on my website.
I have a website hosted on Heroku, where I use a package like Winston to save logs to a .log file. The problem is that on Heroku, when the dyno restarts every day, the log file is deleted and a brand new .log file is created.
What would be the best way to store all these logs without them being lost on a dyno restart?
PS: I don't need log monitoring; I just want a simple way of storing logs so they can be viewed by people on my website. Right now this is done by reading the .log file.
One interesting option could be using Papertrail: there is a free plan and with it you get a REST API to query the logs (you can then customise what users see/download).
Papertrail has a Heroku integration, so pushing the logs from your application should be pretty simple. You can then query/export what you need by implementing access via the REST API.
Heroku also has a Papertrail add-on, which I think is the same concept as above but running on the Heroku cloud.
Obviously the free plan has short data retention, so you will need to see whether that works for you.
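As a rough sketch (the add-on plan, token handling, and query below are assumptions; check Papertrail's current docs), you could attach the add-on and then query the events API from a script or your server code:

heroku addons:create papertrail
curl -s -H "X-Papertrail-Token: <your-api-token>" "https://papertrailapp.com/api/v1/events/search.json?q=error"

Your site could call that same search endpoint and render the returned events for your users instead of reading a .log file.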

How to fix 'You are using an incompatible schema.xml configuration file' error in Drupal for a SolrCloud configuration

I am setting up a SolrCloud configuration for an already existing Solr configuration with Drupal 7. I have configured ZooKeeper on 3 different machines and SolrCloud on 2 other machines. All the conf files are present in the configs directory in ZooKeeper.
Everything is fine up to this point, but communication between Drupal and Solr is not happening due to the following error.
Error: "You are using an incompatible schema.xml configuration file. Please follow the instructions in the handbook for setting up Solr."
Currently, the application is running on Drupal 7 and the Solr 7.x-1.13 module is installed.
So far, I haven't touched any Solr configuration files on the Drupal server.
What other configuration do I have to modify to resolve the schema.xml incompatibility error?
I tried configuring SolrCloud with versions 5.4.1 and 6.4.1, but I get the same error.
In my case, what fixed this issue was killing the solr process and then starting solr again.
First, find the relevant solr process by trying to start solr...
cd /base/path/for/your/solr
bin/solr start
You will see something like...
Port 8983 is already being used by another process (pid: 12345)
Kill whatever process ID is mentioned in the "already being used" message...
kill 12345
Now you should be able to start solr...
bin/solr start
After this restart of solr, I refreshed the page in Drupal and the "incompatible schema.xml" message was gone.
You will have to look at the solr error logs to see what part of your schema.xml is not right.
You'd really have to do this on each one of your SolrCloud nodes, since there isn't any guarantee that ZooKeeper uploaded the correct schema.xml to all shards, and that could be why you are getting the error.
You could use zkcli to upload your configs (https://lucene.apache.org/solr/guide/6_6/command-line-utilities.html), and then reload your collection on all nodes to apply the changes, but even then there's no guarantee it'll work.
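A rough sketch of that approach (the ZooKeeper hosts, config path, config name, and collection name below are placeholders):

server/scripts/cloud-scripts/zkcli.sh -zkhost zk1:2181,zk2:2181,zk3:2181 -cmd upconfig -confdir /path/to/drupal/solr/conf -confname drupal_conf
curl "http://solr1:8983/solr/admin/collections?action=RELOAD&name=my_collection"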
To save time and stress, you could just use a SaaS service, such as https://opensolr.com
You can get it set up for free, and you get a UI to edit your config files, upload them to your server, and a lot of other nice features to manage your Solr index.

After deploying a web job, execution of the blob trigger stopped working

I have deployed my web job to the production environment and suddenly the blob trigger stopped working (looking into App Insights, I can see that the blob trigger is not called).
If I debug the same code from my local machine, the blob trigger fires, but it stopped working in the production environment.
Always On is enabled.
Also, these containers are present:
azure-jobs-host-archive,
azure-jobs-host-output,
azure-webjobs-dashboard,
Microsoft.Azure.WebJobs.Host,
Installed packages:
Microsoft.Azure.Webjobs installed version is 2.0.0,
Microsoft.Azure.WebJobs.Host installed version is 2.0.0
According to your description, we can't directly tell why the blob trigger doesn't work. I suggest you troubleshoot it yourself along the following lines.
"If I debug the same code from my local machine, the blob trigger fires, but it stopped working in the production environment."
Since you found your web job works well locally, I suspect there is something wrong with the connection string configuration in your web app.
I suggest you try the following to set the storage connection string in the production environment's app settings.
1. Open the portal and edit the app settings to include the storage connection string (see the example settings after this list).
2. Create another web app, publish the web job to it, and run it there.
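As a hedged sketch (the account name and key are placeholders), the WebJobs SDK 2.x reads its storage connection strings from app settings named like these:

AzureWebJobsStorage = DefaultEndpointsProtocol=https;AccountName=<account>;AccountKey=<key>
AzureWebJobsDashboard = DefaultEndpointsProtocol=https;AccountName=<account>;AccountKey=<key>

By default the blob trigger listens on the storage account referenced by AzureWebJobsStorage, so make sure it points at the account that holds your container.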
If neither of these helps you solve the error, I suggest you post the log/error messages and the detailed code for the web job.
To find the log/error messages, you could use the approach below.
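One common place to look (assuming the WebJobs dashboard is enabled for your app; the site name below is a placeholder) is the Kudu WebJobs dashboard, which shows invocation history and per-invocation logs:

https://<your-app>.scm.azurewebsites.net/azurejobs/#/jobs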

Unable to start messaging engine in WebSphere

I am facing a problem while starting the WebSphere messaging engine for one of the applications deployed on WebSphere. This application gets deployed automatically as part of the installation of WebSphere Lombardi 7.2 Express edition, which uses WebSphere 7 internally. When I try to start the messaging engine from the WebSphere administrative console, I get the following error:
The messaging engine ProcessCenter01.twperfsvr-twperfsvr_bus cannot be started as there is no runtime initialized for it yet, retry the operation once it has initialized. For the runtime to successfully initialize the hosting server must be started, have its 'SIB service' already enabled, and dynamic configuration reload enabled. If this is a newly configured messaging engine and it is the first messaging engine to be hosted on this server, then it is most likely the 'SIB service' was not previously enabled and thus the server will need to be restarted. The messaging engine runtime might not be initializing because of an error while trying to start, examine the SystemOut.log of the hosting server to check for error messages indicating the problem.
After restarting the server, the same error shows. Can anyone help me find out what gets loaded as part of the "initialization of runtime"? Are there any config files etc. that I need to check to solve this issue? I suspect some missing configuration is preventing the runtime from loading for this particular application.
I too faced this issue today and had to delete all the files under the message store.
Check the directory/file path mentioned in
Application servers > server1 > Messaging engines > XXX.server1-primaryBus > File store
Just enable the SIB service for the particular server.
Example: Servers --> Application servers --> click on the server name --> on the right-hand side you will see 'SIB service' --> check the box to enable the service.
This will solve your problem.
Recently I faced the same issue when I rebuilt the JVMs in the UAT environment. After searching the web I found that the messaging engine was not initializing because of old messages saved in the message store (flat files in my environment). After deleting the old message store and restarting the servers, it initialized.
I have struggled with this problem too.
In our situation the problem was that the file store location being used had already been created for a different (or old) messaging engine.
If you add a bus member to the service integration bus and use a file store implementation, you need to supply the paths for the store and log folders. Make sure these locations don't exist yet, otherwise you will run into the problem above. The messaging engine for this member will use these folders.
If you have a script for creating the message bus infrastructure, make sure that when you delete the bus or remove messaging engines, you also remove their file store/log folders before you re-run your script, as in the sketch below.
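A minimal cleanup sketch (the paths are placeholders for whatever store and log locations your script supplies to the bus member):

rm -rf /opt/ibm/filestores/mybus-me01/store
rm -rf /opt/ibm/filestores/mybus-me01/log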
Another possibility is that you are using an external database as a data store, and the user that is used for the connection is not allowed to create the required schema. You might find an FFDC entry like this:
DB2 SQL Error: SQLCODE=-552, SQLSTATE=42502, SQLERRMC=DB2ADMIN;CREATE SCHEMA, DRIVER=3.61.65
You then have to go to your database administration tool and grant DB2ADMIN the proper privileges, then restart the server or cluster.
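A hedged sketch of such a grant from the DB2 command line (the database name is a placeholder, and DBADM is a deliberately broad privilege; a narrower grant such as implicit schema creation may be enough):

db2 CONNECT TO MYDB
db2 "GRANT DBADM ON DATABASE TO USER DB2ADMIN"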
Finally, this issue has been resolved. I had not created the schema in SQL Server with the same name as the username I gave for connecting to SQL Server during the installation of WLE 7.2.
Please find details about this at the link below:
http://www.ibm.com/developerworks/forums/message.jspa?messageID=14795282
