I am trying to set a crontab in cPanel but it always shows an error:
New lines are not permitted in crontab entries.
Note: I am trying to do it from cPanel web interface.
If you are using a custom cPanel theme, this may be caused by an issue with a recent cPanel update which forces a security token. If you examine the URL after hitting the submit button, you should notice that the link is missing the security token (cpsess0000000000, for example).
In the past you could get around this by disabling the security token in WHM; however, the newest cPanel release prevents it from being disabled. I would recommend going back to the default theme (x3) and trying again.
I recently ran into this issue myself and am currently building a new cPanel interface for all of my servers. I suppose that it's for the best though. I've been putting this on the back burner for a while.
Just check the contents of /var/spool/cron/username.
It probably contains a special character or a malformed entry, which is what prevents the file from being edited via the cPanel interface.
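To spot those hidden characters, you can dump the file with non-printing characters made visible (run as root; Windows line endings show up as ^M at the end of each line):
# Show non-printing characters in the user's crontab file
cat -A /var/spool/cron/username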
I faced the same issue: after editing the crontab with a PHP script, I was no longer able to edit cron jobs from cPanel. The problem was the Windows-style line breaks written by the script:
shell_exec('echo "' . implode("\r\n", $array) . '" | crontab -');
So, replacing \r\n with \n fixed my issue.
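If a crontab has already been saved with Windows line endings, a quick way to rewrite it in place (run as the affected user) is to strip the carriage returns:
# List the current crontab, drop CR characters, and reinstall it
crontab -l | tr -d '\r' | crontab -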
I installed AM-5.1.1 and deployed it on apache-tomcat-8.5.15 (port 9595). It looks good and I can start it.
The issue I face is with Configuration Options -> Create Default Configuration.
After entering different passwords for the default OpenAM administrator and the default Policy Agent user, I get the following error:
emb.creatingfamsuffix.failure, refer to install.log under /Users/myUserName/openam for more information.
The install.log file didn’t help that much. Any idea how to solve this?
I got this error message and it was due to the iptables service not allowing the connection. Try briefly turning off the iptables service (if you are running Linux) and running the configurator again. If this solves the problem, turn it back on and add a rule to allow your OpenAM traffic.
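As a rough sketch, assuming a CentOS/RHEL-style init system and the Tomcat port 9595 mentioned in the question:
# Temporarily stop the firewall to test
service iptables stop
# If the configurator now succeeds, turn the firewall back on and open the port
service iptables start
iptables -I INPUT -p tcp --dport 9595 -j ACCEPT
service iptables save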
Fixed... it had to do with OpenDJ, which was not installed. I installed it and it works fine now.
For anyone else that may come across this issue, I received this error because I had XAMPP running. Once I quit XAMPP everything was fine.
While trying to get GitLab Kanban Board to run nicely with my GitLab server I somehow managed to get myself locked out of the latter. Whenever I open the GitLab URL now there's the message "No authentication methods configured" and no option for logging in.
Unfortunately, I don't even remember the exact settings that I was messing around with at that time, because it was a while ago and it's only now that I found the time for dealing with this problem again. IIRC one of the last things I did was to try and get OAuth working. (And I think that I was changing some settings in the web interface last, not in the settings files.)
Unable to find a solution online, one of the things I tried was to make a backup and restore it on a different server. But the result is that I get the same message on the new server as well.
Does anyone have any idea on how to recover from this situation? Is there any way for example to enable "normal" login again by changing settings in the database?
If it's not (easily) possible to recover the whole GitLab installation, is there some way to at least extract the bug report data from it? That's the data I would be most unhappy to lose...
I'd really appreciate any help, because I'm completely at a loss right now!
You can use the Rails console to re-enable your sign-in.
sudo gitlab-rails console
s = ApplicationSetting.find_by(signin_enabled: false)
s.signin_enabled = true
s.save
This modifies the Rails application settings directly.
As of version 10.5.X, use this instead (different ApplicationSetting key):
sudo gitlab-rails console
s = ApplicationSetting.find_by(password_authentication_enabled_for_web: false)
s.password_authentication_enabled_for_web = true
s.save
I have a very big problem. How can I start up my CentOS server without the cron jobs defined in crontab -e?
I have to disable this temporarily because I can no longer log in to the server: there is an infinite loop in the sh file that starts at login.
I have to disable all cron jobs at startup, but I can't log in to the server.
Thanks
Davide
Once your server starts, try this command and then make the necessary changes with crontab -e:
"/etc/init.d/crond stop" (Without quotes)
Once you use the command, I think your issue will be resolved.
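If you also want to keep crond from starting on the next boot (so the looping job never gets a chance to run), something like this should work on CentOS 6 and earlier; on systemd-based releases use systemctl instead:
# Stop crond now and prevent it from starting at boot
/etc/init.d/crond stop
chkconfig crond off
# Re-enable it once the bad job has been removed
chkconfig crond on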
If you are using WHM for your server, then follow these steps to stop cron jobs:
1) Login to your control Panel.
2) Go to "Service Manager".
3) Uncheck "Cron Daemon" option.
4) Now, press the Save button.
If you still face any problems, report back so that I can assist you further.
Thanks,
Pratik Jajal
I resolved the problem:
I inserted the hard disk into another computer as an external drive. After that I followed, word for word, the following link: http://pissedoffadmins.com/os/mount-unknown-filesystem-type-lvm2_member.html
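For reference, the steps in that link boil down to activating the LVM volume group on the attached disk and mounting the logical volume (the VolGroup00/LogVol00 names below are examples; take the real names from the vgscan/lvscan output):
# Find LVM volumes on the attached disk
pvscan
vgscan
# Activate the volume group, then mount the logical volume
vgchange -ay VolGroup00
mount /dev/VolGroup00/LogVol00 /mnt
# The crontabs are then reachable under /mnt/var/spool/cron/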
You can open crontab -e and delete each job that runs in the scheduled timeframe. However, I don't think your problem is related to crontabs.
Would it be possible to share snapshots of your issue so that we can try our best to fix it?
Thanks,
Pratik Jajal
I am currently trying to automate the process of Bamboo remote agent installation and uninstallation. I have run into a problem regarding adding and removing capabilities.
What I am trying to automate:
(The following is what I do on the Bamboo server via the GUI; I want to do this on the remote agent machine via a bash script.)
1) I install the remote agent on a VM, then start it up. I go to the Bamboo interface and click on the newly created agent's name.
2) I add a custom capability; for the key I put 'buildserver' and for the value I put the name of the agent.
3) I add an 'Executable' capability of type 'Command', with executable label 'cygwin' and path 'C:\cygwin64\bin\bash'.
4) I navigate to the git executable and remove it by clicking 'delete'. <--- (the problem step)
What I've done:
I have looked here and found a way to automate steps 1-3 using the following "bamboo-capabilities.properties" file:
buildserver="AGENTNAME"
system.builder.command.cygwin="C:\cygwin64\bin\bash"
However, I am stuck on how to remove the git capability (step 4). I've tried appending something like this to the file:
system.git.executable=""
but it does not seem to do anything. Does anyone know how I would do this? There seems to be very little documentation about this online.
Thanks very much.
I never found a way to get around this, but I did find a workaround. I later learned that the point of removing git in my situation was to let a shared capability, also called git, take precedence. My workaround was to set the non-shared capability to the value of the shared capability. I am not 100% sure this does the same thing, and I am not in a position to test it yet, but as a capability seems to be just a key-value pair I don't see why it wouldn't... will update if anything breaks.
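In bamboo-capabilities.properties terms, the workaround looks something like this (the git path is an assumption; use whatever value the shared capability actually carries):
# Pin the agent-local git capability to the same value as the shared one
system.git.executable="C:\Program Files\Git\bin\git.exe"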
I've got hgweb up and running on IIS 7 (on Windows Server 2008). The web interface works, and I can view, pull, and clone the repositories there. But I cannot push; doing so gives me a 502 error right after "searching for changes". Using --debug shows the last few lines as:
sending unbundle command
sending 622 bytes
HTTP Error: 502 (Bad Gateway)
I am using TortoiseHG to push, but the result is the same when using the mercurial command line.
I had followed the tutorial here to set up hgweb: http://www.sjmdev.com/blog/post/2011/03/30/setting-mercurial-18-server-iis7-windows-server-2008-r2.aspx
Looks like an old question, but someone is bound to come across it again. I was close to drawing a black circle on a wall and... anyhow, the issue for us was the way the central repository was created. We cloned it from Bitbucket while connected to the machine via Remote Desktop as the local administrator.
The issue was in the [Repository]\.hg folder. You need to set correct permissions on it. For a test, try adding Everyone -> Full Control. Please make sure you change this to a dedicated network login or an appropriate local account afterwards.
I was seeing the exact same behaviour: the push itself appeared to work fine, except that it ended with a Bad Gateway every time. After correct permissions were set, the issue was gone.
Thinking about it now, probably the best solution is to add each network login that uses the repo to the machine's users, and then grant local users access permissions on the .hg folder.
Hope it helps someone.
Try using the ISAPI module method instead of the CGI method that executes python.exe, as documented here. There's also another related, and possibly duplicate, question here as well.
Take a look at the 'push_ssl' setting in your hgweb.config file.
I was getting the same error (I had mine set to '*'), and was able to resolve it by removing the line entirely. Granted, this makes Mercurial somewhat less secure, but it lets me get past the configuration issue (for now) while I investigate properly configuring SSL on the server.
You may also have to review the 'allow_push' setting in order to get past further errors (or take another look at your authorization).
NOTE: At least in my case, having 'push_ssl = false' wasn't enough as that resulted in further errors (authorization failed).
(Again this is simply a temporary solution until the server can be properly secured.)
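For reference, the relevant part of hgweb.config would look something like this (insecure, for testing only, as noted above):
[web]
# Allow pushing over plain HTTP while SSL is not yet configured
push_ssl = false
# Permit pushes from any user; tighten this to specific accounts later
allow_push = *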
This can happen for different reasons. To get more details about the error, run:
hg push --config ui.usehttp2=true --config ui.http2debuglevel=info
For example, the problem may occur because of a proxy server, or simply because the Mercurial Web Server "forgets" about the repositories it needs to serve. If you are using TortoiseHg Workbench, go to the Workbench UI, Repository -> Start Web Server, and make sure that your repository is in the list of served repos.
Try using https instead of http in .hg/hgrc; that resolved this problem for me with code.google.com.
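That is, in the repository's .hg/hgrc (the project path shown is a placeholder):
[paths]
default = https://code.google.com/p/yourproject/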
I had this issue, and the problem ended up being the server running out of disk space.