In this documentation: https://docs.craftercms.org/en/3.1/developers/remote-assets.html#by-passing-remote-assets-in-delivery-for-webdav
To avoid proxying the WebDav /remote-assets in Delivery ...
It implies you can run Crafter Delivery with URLs like "/remote-assets/webdav/profile1/mypath/logo.png", just like in Studio. However, the WebDAV-related configuration is only discussed in the context of Studio:
https://docs.craftercms.org/en/3.1/site-administrators/studio/webdav-profiles-configuration.html
I understand that not letting Delivery proxy WebDAV is the right thing to do, but for documentation completeness, how do you configure WebDAV profiles for Delivery? E.g., what is the XML file path in a delivery-only environment?
The configuration path is the same for Studio and Delivery: it's the file referenced in the link you provided. You then publish this file so that Delivery can pick it up.
I see the problem now:
cd /opt/crafter/authoring/data/repos/sites/sample
ls -l sandbox/config/studio/webdav
total 8
-rw-r--r-- 1 michael admin 812 Apr 29 13:26 webdav.xml
ls -l published/config/studio/webdav
ls: published/config/studio/webdav: No such file or directory
The publishing process missed the 'webdav' folder and file altogether. Using the above example, I assume this file should exist:
published/config/studio/webdav/webdav.xml
Is this considered a Deployer bug?
You need to publish the webdav.xml configuration so the Deployer can process it.
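For completeness, here is a quick way to confirm that publishing actually carried the profile file over. The paths are taken from this thread's example install (site name "sample"), so adjust them for your environment:

```shell
# Sanity check: after publishing, the WebDAV profile config should exist in
# both the sandbox and the published repository.
SITE=/opt/crafter/authoring/data/repos/sites/sample
ls -l "$SITE/sandbox/config/studio/webdav/webdav.xml"    # authoring copy
ls -l "$SITE/published/config/studio/webdav/webdav.xml"  # must exist after publishing
```

If the second file is missing, the Deployer never saw the configuration.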
Related
I am on an Ubuntu machine, writing into a log folder /var/log/APP through cron.daily. The folder is owned by the APP user and needs its permissions set to 755 for the job to work. I had to reset the folder's permissions to 755 again and again after finding them automatically changed to 700.
What can be the possible causes for this kind of behavior?
Content of cron.daily:
00 22 * * 1-5 app app ARG > /var/log/APP/APP.$(date +"\%Y-\%m-\%d").log 2>&1
35 13 * * 2-7 app app ARG > /var/log/APP/APP.$(date +"\%Y-\%m-\%d").log 2>&1
Not 100% sure, but I would guess that you have a logrotate rule set up for this folder. If it's a common application like Apache or MySQL, and you're running a common Linux distro, this is very likely.
Depending on your distro, you should have either a file /etc/logrotate.conf, or a directory /etc/logrotate.d/ with one file per service, or even both.
Check these files for rules covering the directory in question. If you need the directory to be owned by a different user, you can use logrotate's create directive (or modify it, if it already exists).
But make sure that the original service writing the logs is still able to do so.
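As a sketch of that fix, assuming a logrotate rule is indeed the culprit and that APP is both the owning user and group (all names and numbers here are placeholders, not taken from the question):

```shell
# Hypothetical logrotate rule for /var/log/APP. The "create" directive is
# the part that controls the mode and owner logrotate applies when it
# recreates the log file after rotation.
rule='/var/log/APP/*.log {
    daily
    rotate 14
    compress
    missingok
    create 0644 APP APP
}'
printf '%s\n' "$rule" > /tmp/APP.logrotate
# Review the result, then install it (as root) as /etc/logrotate.d/APP.
```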
I faced the same issue.
It's most likely because of a permissions issue in the file /var/lib/dpkg/info/nginx-common.postinst
Change the chmod 640 "$access_log" to chmod 655 "$access_log"
for both the access_log and the error_log,
and it's done!
Refer to this link for more info
https://askubuntu.com/questions/794759/annoying-access-problem-on-var-log-nginx
I need to set up HTTP Live Streaming on CentOS. Can anyone help with a step-by-step configuration?
I did a lot of googling but didn't find a proper solution. Everyone says this can be achieved with FFmpeg, but without a proper procedure.
You need to install a web server with WebDAV support; Apache with the WebDAV modules activated will do it. The dav_fs module (mod_dav_fs) extends Apache so that a user has the rights to write to a predefined directory. So, create this directory first (e.g. /opt/webdav/hls/path2stream) and chmod/chown it to, e.g., user and group "apache". Then edit httpd.conf to set the server name according to the uname convention. Finally, you can log the PUT commands from the encoder and the GET commands from the player in the Apache log directory (/var/log/httpd/access_log).
greez, nico
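A rough sketch of those steps on a stock CentOS Apache; the directory, the apache user/group, and the module lines come from the answer above plus standard Apache defaults, so treat this as an outline rather than a tested HLS recipe:

```shell
# Create the upload target for the encoder's PUT requests (run as root;
# the path and the apache user/group are this answer's examples).
mkdir -p /opt/webdav/hls/path2stream
chown apache:apache /opt/webdav/hls/path2stream
chmod 755 /opt/webdav/hls/path2stream

# httpd.conf additions (stock Apache DAV module names):
#   LoadModule dav_module modules/mod_dav.so
#   LoadModule dav_fs_module modules/mod_dav_fs.so
#
#   <Directory "/opt/webdav/hls/path2stream">
#       Dav On
#   </Directory>
```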
I'm trying to get my Raspberry Pi - which is running the stock version of Debian to be a remote repository for Mercurial. I have set up local repositories on my desktop and laptop (running Mageia) and they work fine locally. I want to be able to push and pull any changes to the Pi. I've set-up OpenVPN on the Pi, so I can access it and, hopefully, push and pull my software from anywhere in the world.
So, I have followed these instructions:
Step-by-step (using Apache2 as my web server) and when I try to connect as in step 9.1.2 with this:
Check if it works by directing your browser to yourhost/hg/.
By putting pi/hg into Firefox, I get an internal server error. (Just putting pi into Firefox gives me the default Apache message and all is good.)
My Apache error log shows me this:
Traceback (most recent call last):
File "/var/hg/hgwebdir.cgi", line 18, in <module>
application = hgweb(config)
File "/usr/lib/python2.7/dist-packages/mercurial/hgweb/__init__.py", line 27, in hgweb
return hgweb_mod.hgweb(config, name=name, baseui=baseui)
File "/usr/lib/python2.7/dist-packages/mercurial/hgweb/hgweb_mod.py", line 34, in __init__
self.repo = hg.repository(u, repo)
File "/usr/lib/python2.7/dist-packages/mercurial/hg.py", line 93, in repository
repo = _peerlookup(path).instance(ui, path, create)
File "/usr/lib/python2.7/dist-packages/mercurial/localrepo.py", line 2350, in instance
return localrepository(ui, util.urllocalpath(path), create)
File "/usr/lib/python2.7/dist-packages/mercurial/localrepo.py", line 79, in __init__
raise error.RepoError(_("repository %s not found") % path)
mercurial.error.RepoError: repository /var/hg/repos not found
[Wed Jan 22 17:23:26 2014] [error] [client 10.8.0.6] Premature end of script headers: hgwebdir.cgi
If I try to connect from Mercurial with remote (http) repository as pi/ I get this in my Apache logs:
[error] [client 10.8.0.6] File does not exist: /var/www/.hg
In my Tortoise HG logs on the local machine I get this:
[command returned code 255 Wed Jan 22 17:24:49 2014]
% hg --repository /path/sqlforms outgoing --quiet --template {node}
'URL goes here' does not appear to be an hg repository:
---%<--- (text/html)
<html><body>
<p>This is the default web page for this server.</p>
<p>The web server software is running but no content has been added, yet. Rob has changed summat</p>
If I use pi/hg as the remote server, Tortoise HG gives me this:
[command returned code 255 Wed Jan 22 17:25:15 2014]
% hg --repository /path/sqlforms outgoing --quiet --template {node} http://pi/hg/
HTTP Error: 500 (Internal Server Error)
[command returned code 255 Wed Jan 22 17:25:24 2014]
sqlforms%
/var/hg/repos does exist as a directory.
Hopefully I've given the right amount of info there. I'm no Linux newb, but I am new to Apache and fairly new to Mercurial, so I'm probably doing something stupid. AFAIK I have faithfully copied the steps on the web site I linked above. Is that enough information to troubleshoot? If not, I can supply anything else as needed. Many thanks.
It ended up being a number of different things - some of which Mata mentioned, so I'm putting his/her answer down as he/she was kind enough to point me in the right direction. I'm putting some more details below in case it helps someone else because in all documentation I've found some points aren't well made.
It's important to note that the /var/hg directory that you specify ends up being accessed as server_name/hg when accessing via http. So, if you put a directory on your server in path:
/var/hg/dex
Then this is accessed via http as:
http://server_name/hg/dex
So, in this case, for HTTP access you are "mapping" /var/hg/dex to hg/dex.
I think what is super-confusing about the documentation is how much the name hg is reused; it would be better described if the base directory structure on your server were something like this:
/var/mercurial_repositories
You would obviously have to set up the Apache config file to point there as its base directory rather than at:
/var/hg.
This would make it far more obvious that you are mapping /var/mercurial_repositories to hg when it comes to remote access. The way it is described it is far too ambiguous with hg being used in too many different places. Whereas this might be obvious to experienced users or someone who's had someone sit down and explain it to them, to a newb, it's very confusing.
Then, the other thing that is not obvious in the documentation is that:
/var/hg/repos
is not a directory for ALL repositories. It is a directory for one repository. I struggled with this for quite some time. Again, the documentation is very misleading for a newb. If it said:
/var/hg/repo (singular), it might be a lot better.
I realised later that, tucked away somewhere in one of the pages of documentation, it mentions that you need subdirectories within repos, but again, the way this is worded is very confusing for someone starting out. Something like:
/var/mercurial/repositories_base_directory
Would be far clearer.
Also, for every directory you set up in your base directory, you have to have a new entry in the file:
/var/hg/hgweb.config
This is done like this:
[paths]
c82xx = /var/hg/repos/c82xx
The documentation on this is especially terrible the way it just says:
repos/ = repos/
The issue with these path settings, which, again, is explained nowhere (as far as I could see), is that the left-hand side of the equals sign is how your remote machine accesses the directory where your repository is, as a subdirectory of:
http://server/hg
The right-hand side is the absolute path on the server. It means you can type a relatively short path when remotely pushing and pulling. In this instance:
http://server/hg/c82xx
Next up, as Mata pointed out, you need to do hg init in the directory on the server, then, from the local machine, push whatever you have already got to the server. So, from the directory with .hg on your local machine (in this case my c82xx project):
hg push http://server/hg/c82xx
There are two more vital things to note, though, before you can do this:
1. You need to create, within the .hg directory on the SERVER, a file called hgrc and put this in it:
[web]
push_ssl = false
allow_push = *
Now, from what I understand, you should ONLY do this on a trusted network. For me, I'm on a VPN or my LAN, so it's fine.
2. The hgrc file AND the repository directory and all its subdirectories must have permissions that allow the web server to write to them.
That should do it. Phew!
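The server-side steps above can be sketched in one go. The repository name c82xx comes from this post, but the www-data user/group is an assumption (Debian's Apache user; CentOS uses apache), so treat this as a sketch rather than a drop-in script:

```shell
# Run as root on the server: create one repository under the hgweb base
# directory, allow pushes over plain HTTP, and make it writable by Apache.
REPO=/var/hg/repos/c82xx

mkdir -p "$REPO"
hg init "$REPO"                       # create the repository on the server

# Allow pushes without SSL -- only sensible on a trusted LAN/VPN:
cat > "$REPO/.hg/hgrc" <<'EOF'
[web]
push_ssl = false
allow_push = *
EOF

chown -R www-data:www-data "$REPO"    # Apache's CGI user must be able to write
chmod -R u+rwX "$REPO"
```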
I swear, version control is more complex than writing the software in the first place! :D
Specifying a directory as config only works if it's a repository. You're pointing it to a directory which doesn't seem to be a repository.
Maybe you're trying to serve multiple repositories from within that directory? In that case you need to set config to point to a config file where you can then specify the repositories you want to include.
Have a look here; it should describe everything you need.
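For the multi-repository case, that config file can be as small as this; the glob form under [paths] is standard hgweb syntax, and the paths are the ones used in this thread (written to /tmp here just for review):

```shell
# Serve every repository found under /var/hg/repos at http://server/hg/<name>.
cat > /tmp/hgweb.config <<'EOF'
[paths]
/ = /var/hg/repos/*
EOF
# When it looks right, move it to /var/hg/hgweb.config.
```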
I have been put in charge of an Ubuntu 13 server installation. Apache is configured to use /var/www as the default directory which is correct. The issue is that it seems there is a fallback directory configured that points to /usr/share. So if I type into a browser (www.address.com) it will serve the documents out of /var/www, but if I know the name of a directory in /usr/share and type in (www.address.com/sharedir) then it will serve out of the /usr/share directory. I have looked in the apache config file and default site config file and do not see this association. I do not want this behavior and am concerned that this is the default behavior out of the box.
Can anyone guide me to another areas where this behavior may be controlled/managed.
Thanks for any assistance.
Open your
/etc/apache2/sites-available/default
file and replace
/var/www
with
/path/to/folder/you/wish
Save; it is then best to restart Apache with
service apache2 restart
Now put the website contents in the new location /path/to/folder/you/wish.
Once you have changed the DocumentRoot of the site as mentioned above, no files will be fetched from any other location. Hope this helps you. :)
[SOLVED] After a bunch more digging around I discovered that the user that originally set up this server erroneously put .conf files in the 'conf.d' directory and 'mods-enabled' directory that were routing traffic to the other directories. Sorry to anyone that noodled on this one.
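In case it helps the next person, a quick way to hunt down stray directives like those is to grep the Apache config tree; the directory list below is the stock Debian/Ubuntu layout:

```shell
# List every Alias/DocumentRoot directive in the usual Apache config dirs.
find_directives() {
    grep -Rni --include='*.conf' -e 'Alias' -e 'DocumentRoot' "$@" 2>/dev/null
}
find_directives /etc/apache2/conf.d /etc/apache2/mods-enabled /etc/apache2/sites-enabled || true
```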
I have a web application (bugzilla) in apache that needs to use sendmail.cf . When it tries to use sendmail I get the error:
/etc/mail/sendmail.cf: line 0: cannot open: Permission denied
the web application is in group "apache"
Permissions for sendmail look like:
-rw-r--r-- 1 root root 58624 2008-03-29 05:27 sendmail.cf
What do the permissions for sendmail.cf have to look like in order to be accessible by Apache but still secure enough to lock out everyone else?
I had this issue on CentOS 7 and the answer was here:
http://www.mysysadmintips.com/linux/servers/591-sendmail-won-t-send-emails-on-centos-7-permission-denied
Quick 'sestatus' check revealed that the issue was caused by SELinux.
Running: getsebool httpd_can_sendmail returns off, which means that
Apache (httpd) doesn't have permission to send emails.
The issue was resolved by running: setsebool -P httpd_can_sendmail on
You should have a different .cf file for local submissions, usually called (something like) submit.cf - this will have a slightly different batch of settings specifically for SENDING mail (whereas sendmail.cf will be the part for RECEIVING mail). The submit.cf is safe to be globally readable, because (in theory) all processes on the box should be trusted to send email.
Set the owner to root and the group to apache: chown root:apache sendmail.cf
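Spelled out as commands (the chmod line is my addition to the answer above, so that "other" users lose the read access they currently have; the full path is assumed from the question):

```shell
# Owner root can edit; group apache (the web app's group) can read;
# everyone else is locked out.
chown root:apache /etc/mail/sendmail.cf
chmod 640 /etc/mail/sendmail.cf
ls -l /etc/mail/sendmail.cf   # expect: -rw-r----- 1 root apache ...
```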