I had two scheduled jobs running in my Developer Program Benefit subscription. I had originally set them up using the classic portal, and I set them to run every hour since I am on the Free Tier. Everything was working fine until I had to redeploy the code (console applications).
First problem: I could not find a way to easily update the code for the scheduled jobs, so I deleted them and recreated them. I recreated them from the portal, first by creating the WebJob and then by going to the scheduled jobs collection and creating a schedule for the WebJob.
However, each time it runs, it fails with the following error:
Http Action - Response from host 'mysite.scm.azurewebsites.net':
'Unauthorized' Response Headers: Date: Thu, 16 Mar 2017 04:07:00 GMT
Server: Microsoft-IIS/8.0
WWW-Authenticate: Basic realm="site"
Body:
401 - Unauthorized: Access is denied due to invalid
credentials.....
(followed by some other HTML unrelated to the error)
I also tried deploying the job directly from Visual Studio 2015 (latest update); however, the same thing happens: running the job fails with the same error.
It is my understanding that even on the Free Tier I should be able to run scheduled jobs (up to 5 of them) every hour.
Why is it failing and complaining about credentials?
EDIT: The job runs fine from App Service > WebJobs, so there's nothing wrong with the job itself; the code executes correctly. I just can't get it to run from the Scheduler.
As far as I know, the “Access is denied due to invalid credentials” error happens when you don't set the authentication information in your Scheduler job's action settings tab.
I suggest you first find your WebJob's username and password. You can find them on the WebJob's Properties tab.
Then set that username and password in the action's settings. It should then work.
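For reference, the Scheduler job is just making an HTTP POST with Basic authentication against the site's Kudu endpoint, so you can test the credentials independently. A minimal sketch, assuming a triggered WebJob named myjob (a hypothetical name) and the username/password taken from the Properties tab:

# Manually trigger the WebJob with Basic auth (the same call the Scheduler makes).
# USER and PASS are the credentials from the WebJob's properties.
curl -X POST -u "$USER:$PASS" "https://mysite.scm.azurewebsites.net/api/triggeredwebjobs/myjob/run"

If this returns 200/202, the credentials are good; a 401 here means the Scheduler will fail the same way.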
Would you be able to run your schedule based off of a CRON expression instead? The post highlights how to use the settings.job file to provide a schedule to execute on, along with some example CRON expressions, so you can figure out whether this scenario would work for you.
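As a sketch of what that looks like: you deploy a settings.job file next to the WebJob executable containing a six-field NCRONTAB expression (the extra leading field is seconds). For example, to fire at the top of every hour:

# Create settings.job next to the WebJob executable (hourly schedule).
# Field order: second minute hour day month day-of-week.
cat > settings.job <<'EOF'
{
  "schedule": "0 0 * * * *"
}
EOF

One caveat: this kind of schedule relies on the WebJob host running continuously (Always On), which as far as I know is not available on the Free tier.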
Related
Good Morning,
I am working on running and monitoring Databricks notebooks from Airflow. I am able to log in, spin up a new cluster, and execute a notebook. The issue I have is the monitoring side of things. Using the Jobs API 2.1, I am making the following call:
curl --location --request GET --header 'Authorization: Bearer <token>' 'https://<MY-SERVER>.azuredatabricks.net/api/2.1/jobs/runs/list?job_id=<MY-Job-Id>'
According to the API documentation, that endpoint should feed back some stats about the job's runs. Most important to me is state.result_state, as I want to use it as a sensor.
My issue is that when I hit the list endpoint for any of my jobs, the only thing I get back is:
{
"has_more": false
}
I can't find anywhere in the documentation saying that I need to add something in the notebook itself to emit the metric I'm looking for. Is there maybe an elevated set of permissions I need for my auth token that would give me the full set of metrics I am looking for?
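For what it's worth, a body containing only has_more: false generally means no runs matched the query (for example, a job_id with no runs yet), rather than metrics being withheld. Once runs do come back, the field in question can be pulled out like this (a sketch; <token>, <MY-SERVER>, and <MY-Job-Id> are the same placeholders as above):

# Extract the most recent run's result_state (SUCCESS, FAILED, ...) with jq.
# Note: result_state only appears once a run has completed; in-flight runs
# only carry state.life_cycle_state.
curl --silent --header 'Authorization: Bearer <token>' 'https://<MY-SERVER>.azuredatabricks.net/api/2.1/jobs/runs/list?job_id=<MY-Job-Id>' | jq -r '.runs[0].state.result_state'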
I am trying to clean up my GitLab server running v12.x. I wrote a Python script to query the GitLab server's API; when I send a request, I get a 201 response code. I used the official docs (https://docs.gitlab.com/ee/api/jobs.html), but the jobs remain in the web UI... I also tried deleting the artifacts from the server, and I get a 204 back as a response code.
I'm just using a simple POST command:
curl --request POST --header "PRIVATE-TOKEN: <token>" "https://gitlab.corp.com/api/v4/projects/1/jobs/1/erase"
How can one verify that the jobs are actually deleted?
In the admin settings I set up job archiving after 1 month, with artifact deletion as well. But in the admin portal I still have 10,000-plus jobs...
After the script has been running for 4 hours, the API stops accepting the token and the user account can't do any Git commands for 24 hours, then returns to normal... By that I mean you can't view any code in the web UI, and Git commands will not work either...
Has anyone experienced this issue?
The long and the short of it is that the API will not do it. I had to update the PostgreSQL DB directly to clean them out.
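That matches the documented scope of the erase endpoint: it removes a job's trace and artifacts, but the job record itself survives, which you can confirm by fetching the job back after erasing it (same placeholder project and job IDs as above):

# After an erase, the job record still exists; only its trace and artifacts
# are gone, so this still returns the job's JSON.
curl --header "PRIVATE-TOKEN: <token>" "https://gitlab.corp.com/api/v4/projects/1/jobs/1"

Fully removing the job rows themselves is what required going into the database.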
I have an Azure App Service. Next, I created a deployment slot, shown as a web app called myapp/staging.
In Visual Studio, I deployed to the staging slot.
It works for a couple of minutes, but then it looks like it was never deployed.
If any error occurs during a slot swap, it's logged in D:\home\LogFiles\eventlog.xml. It's also logged in the application-specific error log.
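If you don't want to browse for that file by hand, one way to pull it down is through the slot's Kudu VFS API (a sketch; assumes the slot's SCM hostname is myapp-staging.scm.azurewebsites.net and uses your deployment credentials):

# Fetch the event log from the staging slot's Kudu site.
# Paths under /api/vfs/ are relative to D:\home.
curl -u "$USER:$PASS" "https://myapp-staging.scm.azurewebsites.net/api/vfs/LogFiles/eventlog.xml"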
During custom warm-up, the HTTP requests are made internally (without going through the external URL). They can fail with certain URL rewrite rules in Web.config. Just review your rewrite rules.
Since you're publishing through VS: when you right-click and select Publish Web, you'll find the Settings tab on the left-hand side. Select it, then expand File Publish Options and check the box for “Remove additional files at destination”. Review this option based on your requirements.
Also, just for additional information: an HTTP request to the application root is timed. The swap operation waits 90 seconds for each HTTP request and retries up to 5 times. If all retries time out, the swap operation is typically stopped.
It's a pretty standard spark-submit action. What's weird is that I get a 401 from time to time; if I just wait a few minutes, I can run again, until the next time I get a 401.
It's very similar to this issue:
https://jira.apache.org/jira/browse/SPARK-24227
I'm just wondering if I'm not setting GOOGLE_APPLICATION_CREDENTIALS correctly when submitting, but at the very beginning I hadn't set any credentials and could still run.
Has anyone had similar issues?
It's a subtle bug where Spark can't refresh the access token. One workaround is:
Run a dummy kubectl command to refresh the access token, e.g. kubectl get namespace.
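A minimal sketch of that workaround, assuming the current kubeconfig context already points at the target cluster:

# Any cheap authenticated kubectl call refreshes the cached access token.
kubectl get namespace > /dev/null
# Then submit immediately afterwards, before the token can go stale again
# (remaining arguments elided; <cluster-endpoint> is a placeholder).
spark-submit --master "k8s://https://<cluster-endpoint>" ...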
I have two scheduled tasks running in the ColdFusion administrator. They give a 403 Forbidden error when run through the ColdFusion administrator. Here is the log I get:
"Information","DefaultQuartzScheduler_Worker-8","02/22/17","10:11:00","","Task default.example - Get detail Dev triggered."
"Information","DefaultQuartzScheduler_Worker-4","02/22/17","10:11:00","","Task default.example - Get detail Live triggered."
"Error","DefaultQuartzScheduler_Worker-8","02/22/17","10:11:00","","403 Forbidden "
"Error","DefaultQuartzScheduler_Worker-4","02/22/17","10:11:00","","403 Forbidden "
The task URL runs fine through the browser, so it seems to be something related to a permission problem. I have checked the permissions of the ColdFusion application's 'Log on as' user on the CFIDE directory and the task URL's directory; it has full control.
Can anyone guide me in solving this problem?
This post is a little old, but I've happened across the same problem and thought I'd share our solution here.
We're running ColdFusion 2016 on a dedicated Windows 2012R2 box. We have several client sites on our box and we're completely locked down using Peter Freitag's lockdown guide.
This was a new server migration from a CF10 server on another box. Once I set up the scheduled task exactly as we had done before, I received several "403 Forbidden" responses.
The only real way to troubleshoot this is to activate the "Save output to a file" option on the scheduled task itself and save the file to a directory your CFUser has write access to. ("CFUser" of course is the Windows user your CF service runs as.)
My first test of the URL was through Chrome on the server, and it worked just fine, so my URL was valid and publicly accessible.
When I fired the scheduled task, it said "The scheduled task ran successfully"; however, the output file showed it didn't. In our case, an outside service called "Cloudflare" was blocking our request. The error from Cloudflare asked us to enable cookies, which we can't do in a scheduled task request. In our case, our hosting provider must provide an exception for requests made from our server's dedicated IP.
Most of the time, these errors are generated because of file permission issues on Windows. If you're sure that your CFUser has read & execute permission on the requested template, then you need to output the scheduled task's result to fully understand the error.
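As a quick way to see the raw response outside ColdFusion, you can also reproduce the scheduled task's bare HTTP request from the server itself (a sketch; the task URL below is a placeholder):

# Reproduce the scheduled task's request: no browser, no cookies.
# -i prints the status line and headers, which is where a block from
# something like Cloudflare shows up.
curl -i "https://www.example.com/tasks/getdetail.cfm"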