Ubuntu apt-get with .pac files

I would like to use Ubuntu's apt-get on my computer, which is behind a company proxy server. Our browsers use a .pac file to figure out the right proxy. In the past, I browsed through that file and manually picked a proxy, which I used to configure apt, and that usually worked. However, they are constantly changing this file, and I'm getting so tired of this that I would love it if I could just somehow tell apt to use the .pac file.
I've done some research, but every thread I found on this topic so far ended in: 'it's probably not possible, but why don't you just read the .pac file and manually pick a proxy'. Because it's a pain in the neck and sounds like the computer stone age, that's why.
I find it hard to believe that something as ubiquitous as the .pac method (which I believe it to be, but I may be wrong) has not been addressed yet by Ubuntu. Can someone give me a definitive answer on that? Are there other distributions that allow that sort of thing?

The console’ish way
If you don’t want to use dconf-editor, or if you use another flavor of Linux, you can use this second method.
Create a .proxy file in your home directory. Make it readable and writable only by yourself, as we will store your credentials in there (including your password).
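A minimal sketch of that step, assuming the file just exports the usual proxy environment variables (the host, port and credentials below are placeholders, not taken from the original tutorial):

# lock the file down before putting credentials in it
touch ~/.proxy
chmod 600 ~/.proxy
# hypothetical contents; substitute your own proxy and credentials
cat > ~/.proxy <<'EOF'
export http_proxy="http://user:password@proxy.example.com:8080"
export https_proxy="http://user:password@proxy.example.com:8080"
export no_proxy="localhost,127.0.0.1"
EOF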
...
Optional step: if you want your proxy setting to be propagated when you’re using sudo, open the sudo config file with sudo visudo and add the following line after the other Defaults lines:
Defaults env_keep += "http_proxy https_proxy no_proxy"
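You can then check that the variables actually survive sudo (assuming they are exported in your current shell):

sudo env | grep -i proxy   # should list http_proxy, https_proxy and no_proxy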
From http://nknu.net/ubuntu-14-04-proxy-authentication-config/

Related

Strange problem: Varnish automatically reverts the varnish.service file

I am facing an extremely rare and peculiar problem.
We use Magento 2 on many websites, and it uses Varnish almost out of the box. We face problems, but they are rare and easily fixable.
Yesterday, we noticed something really strange.
The file /lib/systemd/system/varnish.service is somehow reverting to its default form, without us updating or changing it. When it reverts, Varnish stops working (because in its default installation Varnish is configured on port 6081, but usually everybody changes this to port 80). So the fix is really easy, but it's really frustrating. I've seen this on different versions too, both 5 and 6.
Does anybody know if Varnish is autoupdating these files somehow?
Many thanks, I am at your disposal for further explanations.
The fact that /lib/systemd/system/varnish.service is reverted shouldn't really be a problem, because you should have a copy of that file in /etc/systemd/system that contains the appropriate values.
If you run sudo systemctl edit varnish and make changes, a new file called /etc/systemd/system/varnish.service.d/override.conf will be created.
If you call sudo systemctl edit --full varnish, a file called /etc/systemd/system/varnish.service will be created.
It's also possible to do this manually by running sudo cp /lib/systemd/system/varnish.service /etc/systemd/system/, but this also requires calling sudo systemctl daemon-reload.
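For example, to persist the port change through an override (a sketch: the varnishd options shown are illustrative; copy the full ExecStart line from your distribution's unit file and change only the -a argument):

# /etc/systemd/system/varnish.service.d/override.conf
[Service]
ExecStart=
ExecStart=/usr/sbin/varnishd -a :80 -f /etc/varnish/default.vcl -s malloc,256m

The empty ExecStart= line is required to clear the original command before defining the new one.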
Have a look at the following tutorial, which explains the systemd configuration of Varnish for Ubuntu: https://www.varnish-software.com/developers/tutorials/installing-varnish-ubuntu/#systemd-configuration.
If you're using another distribution you can find the right tutorial on https://www.varnish-software.com/developers/tutorials/#installations.

How to "safely" allow others to work on my server?

I sometimes have a need to pay someone to perform some programming which exceeds my expertise. And sometimes that someone is someone I might not know.
My current need is to configure Apache which happens to be running on Centos.
Giving root access via SSH on my main physical server is not an option.
What are my options?
One thought is to create a VPS (with a Linux guest) on my main physical server (also running Linux) using VirtualBox (or equivalent), have them do the work, figure out what they did, and manually implement the changes myself.
Does that seem secure? Are there better options? Thank you.
I suggest looking into the chroot command.
chroot() changes the root directory of the calling process to that specified in path. This directory will be used for pathnames beginning with /. The root directory is inherited by all children of the calling process.
The implication of this is that once inside a chroot "jail" a user cannot see "outside" of the jail. You've changed their root directory. You can include custom binaries, or none at all (I don't see why you'd want that, but the point is that YOU decide what the developer can and can't see).
You can use a directory for the chroot, or you could use my personal favorite: a mounted file, so your "jail" is easily portable.
Unfortunately I am a Debian user, and I would use debootstrap to build a minimal system into a small file (say, 5 GB); there doesn't seem to be an official RPM equivalent. However, the process is fairly simple. Create a file; I would do so with dd if=/dev/zero of=jailFile bs=1M count=5120. Then we can mkfs.ext4 jailFile. Finally, we must mount it and include any files we wish the jailed user to use (this is what debootstrap does: it downloads all the default goodies in /bin and such), either manually or with a tool.
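A sketch of the whole sequence on a Debian host (the suite, mount point and size are illustrative):

dd if=/dev/zero of=jailFile bs=1M count=5120   # create a 5 GB image file
mkfs.ext4 jailFile                             # format it as ext4 (confirm the prompt, it is a regular file)
sudo mkdir -p /mnt/jail
sudo mount -o loop jailFile /mnt/jail          # loop-mount the image
sudo debootstrap stable /mnt/jail              # populate it with a minimal Debian system
sudo chroot /mnt/jail /bin/bash                # enter the jail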
After these steps you can copy this file around, make backups, or move servers even. All with little to no effort on the user side.
From a short Google search there appears to be a third-party tool that does nearly the same thing as debootstrap, here. If you are comfortable compiling this tool, can build a minimal system manually, or can find an alternative, and the idea of a portable ext4 jail appeals to you, I suggest this approach.
If the idea is unappealing, you can always chroot a directory which is very simple.
Here are some great links on chroot:
https://wiki.archlinux.org/index.php/Change_root
https://wiki.debian.org/chroot
http://www.unixwiz.net/techtips/chroot-practices.html
Also, here and here are great links about using chroot with OpenSSHServer.
On a side note: I do not think the question was off topic, but if you feel the answers here are inadequate, you can always ask on https://serverfault.com/ as well!
Controlling permissions is some of the magic at the core of the Linux world.
You... could add the individual as a non-root user, and then work towards providing specific access to the files you would like him to work on.
Doing this requires a fair amount of 'nixing to get right.
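A sketch of what that might look like for the Apache case (the user and group names are hypothetical, and /etc/httpd/conf.d is the usual CentOS location; adjust both):

sudo useradd -m contractor                # non-root account for the hired developer
sudo groupadd webconf
sudo usermod -aG webconf contractor
sudo chgrp -R webconf /etc/httpd/conf.d   # group-own the Apache config directory
sudo chmod -R g+rwX /etc/httpd/conf.d     # group read/write; execute bit on directories only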
Of course, this is one route... If the user is editing something like an Apache configuration file, why not set up the file within a private Bitbucket or GitHub repository?
This way, you can see the changes that are made, confirm they are suitable, then pull them into production at your leisure.

How to verify if a disabled or stopped Unix/Linux service has a valid binary path?

I would like to find out whether, for a given disabled or stopped Unix/Linux service, we can verify that it has a valid binary path.
I am not sure if it is necessary to check this; I am a newbie to the Unix/Linux world.
Or are all installed Unix/Linux services verified by default against the existence of their binaries and paths?
For example, the installed unix/linux service, sshd.
Check its status through the command "service sshd status"
Then, check its binary path using command "which sshd".
Is there a better way to check this out?
And what if I would like to check all Unix/Linux services that are either stopped or disabled?
Modern GNU/Linux distributions manage software installations using a software management system. Different such systems are in use, but they all share a common principle: a package carries within itself a description of all of its dependencies, covering installation, configuration and runtime. All of these dependencies are checked prior to an installation. So unless one forces any actions, one has the guarantee that if a package is installed, then all its requirements are fulfilled.
Looking at your example of the sshd (the openssh daemon) this means:
if you installed this package, then you can rely on the fact that all required components are installed and match in their versions, and so on. In this case the daemon control scripts and the main executable are actually contained inside the same package; this is typical for all daemon packages. So if that package is installed, then all of its content is installed.
You could manually check whether all files contained in a package actually exist. But if you really feel this has to be done, for example if you suspect some files were somehow deleted by accident (which is actually pretty hard to do for such system files), then you can ask the software management system to verify the package's integrity. For example, on an openSUSE system you can say zypper verify sshd. That command verifies that all files mentioned in the package actually exist, are the ones originally installed (unaltered) and have the correct permissions. In addition, all package dependencies are checked as well. So if no error is thrown, you can rely pretty much on the fact that everything is fine.
This is obviously just an example; different distributions use different management systems, as mentioned above. But they all offer more or less the same features. And once you understand the idea behind this approach, you will probably never want to miss that elegance and security any more...
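Hedged equivalents on some other systems (the package names are illustrative and vary by distribution):

rpm -V openssh-server       # RPM-based systems: no output means every file verifies cleanly
debsums -s openssh-server   # Debian/Ubuntu, from the debsums package: lists only altered files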

FTP configuration for WordPress

I've installed a WordPress instance on a Linux server, and I need to give it FTP access in order to install plugins and execute automatic backup/restores. I've just installed vsftpd, and started the service, but now what?
How do I figure out/set what the username/pass is?
Should I allow anonymous access?
Is the hostname just 'localhost'?
Any advice would be appreciated. I've never messed with FTP on linux before. Thanks-
Your question is a little unclear because you don't specify what aspect of WordPress "wants" FTP access. If you got WP installed, you clearly have at least some access to the machine already. That said, I'll try to answer around that lack of clarity.
Your questions in order, then some general thoughts:
How do I figure out/set what the username/pass is?
Remember that the man page for a program is a good first stop. A good man page will also contain a FILES or "SEE ALSO" section near the bottom that will point you to relevant config files.
In this case, "man vsftpd" mentions /etc/vsftpd.conf, so you can then do "man vsftpd.conf" to get info on how to configure it.
VSFTPD is configurable, and can allow users to log in in several ways. In the man page, check out "guest_enable" and "guest_username", "local_enable" and "user_sub_token".
The easiest route for your single-user usage is probably configuring local_enable; then your username and password would be your local ones from /etc/passwd.
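A minimal sketch of that configuration (these option names are documented in vsftpd.conf(5); defaults vary by distribution, so check yours):

# /etc/vsftpd.conf
anonymous_enable=NO      # no anonymous logins (see the next question)
local_enable=YES         # allow local users from /etc/passwd to log in
write_enable=YES         # needed for plugin installs and backups
chroot_local_user=YES    # keep each user inside their home directory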
Should I allow anonymous access?
No. Since you're using this to admin your WordPress, there's no reason anyone else should be using this FTP. VSFTPD has this off by default.
Is the hostname just 'localhost'?
Depends where you're coming from. 'localhost' maps back to the loopback, i.e. the same physical machine you're on. So if you need to put FTP configuration information for Server A into a WordPress configuration file on Server A, then 'localhost' is perfectly acceptable. If you're trying to configure the pasv_addr_resolve/pasv_address flags of VSFTPD, then no: you'll want to either pass in the fully qualified name of Server A (serverA.mydomain.com) or leave it off and rely on the IP address.
EDIT: I actually forgot the critical disclaimer: never send credentials over plain FTP. Plain old FTP (meaning not SFTP) sends your username and password in cleartext. I didn't install VSFTPD and play with it, but you'll want to make sure that there is some form of encryption happening when you connect. Try hitting it with WinSCP (from Windows) or sftp (from Linux) to make sure you're getting encrypted SFTP rather than plaintext FTP.
Apologies if you already knew that ;)
You would probably get better answers on Server Fault.
That said:
vsftpd should use your local users by default, and drop you in that user's home directory on login.
Disable anonymous access if you don't need it; I don't think WordPress will care, but your server will be safer.
Yes, or 127.0.0.1, or your public IP if you think you might split the front and back end some day.
WordPress does not natively support SFTP. You can get around this in two ways:
chmod permissions in the appropriate directories to allow the normal, automatic update to work correctly (a sketch follows this list). This is the approach most certain to work, as long as it doesn't trip over any local security policies.
Try hacking it in yourself. There have been any number of threads on this at the WordPress.org forums. Here is a recent one which also talks about non-standard ports. Here is an article about how to try to get it working on Debian Lenny (which also addresses the non-standard port issue).
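A sketch of the first option, assuming Apache runs as www-data and WordPress lives in /var/www/html (both are assumptions; substitute your web server user and document root):

sudo chown -R www-data:www-data /var/www/html/wp-content          # let the web server user own the writable tree
sudo find /var/www/html/wp-content -type d -exec chmod 755 {} \;  # directories: rwxr-xr-x
sudo find /var/www/html/wp-content -type f -exec chmod 644 {} \;  # files: rw-r--r--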

Using directory traversal attack to execute commands

Is there a way to execute commands using directory traversal attacks?
For instance, I access a server's etc/passwd file like this
http://server.com/..%01/..%01/..%01//etc/passwd
Is there a way to run a command instead? Like...
http://server.com/..%01/..%01/..%01//ls
..... and get an output?
To be clear here, I've found the vuln in our company's server. I'm looking to raise the risk level (or get bonus points for me) by proving that it may give an attacker complete access to the system.
Chroot on Linux is easily breakable (unlike on FreeBSD). A better solution is to switch on SELinux and run Apache in an SELinux sandbox:
run_init /etc/init.d/httpd restart
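Before relying on the sandbox, confirm SELinux is actually enforcing (standard SELinux tooling):

getenforce          # should print "Enforcing"
sudo setenforce 1   # temporarily switch a permissive system to enforcing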
Make sure you have mod_security installed and properly configured.
If you are able to view /etc/passwd because the document root or a Directory section is not correctly configured on the server, then the presence of this vulnerability does not automatically mean you can execute commands of your choice.
On the other hand, if you are able to view entries from /etc/passwd because the web application uses user input (a filename) in calls such as popen, exec, system, shell_exec, or variants without adequate sanitization, then you may be able to execute arbitrary commands.
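To illustrate the second case, here is a deliberately vulnerable CGI script (a contrived sketch, not taken from any real application):

#!/bin/sh
# echoes a file chosen by the ?file=... query parameter
echo "Content-Type: text/plain"
echo
FILE="${QUERY_STRING#file=}"
sh -c "cat $FILE"   # user input lands in a shell command line:
                    # ?file=/etc/passwd;ls prints the file AND runs ls

The fix is to validate the parameter against a whitelist instead of interpolating it into a shell command.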
Unless the web server is utterly hideously programmed by someone with no idea what they're doing, trying to access ls using that (assuming it even works) would result in you seeing the contents of the ls binary, and nothing else.
Which is probably not very useful.
Yes, it is possible (the first question) if the application is really, really bad in terms of security.
http://www.owasp.org/index.php/Top_10_2007-Malicious_File_Execution
Edit #2: I have edited out my comments as they were deemed sarcastic and blunt. OK, now that more information has come from gAMBOOKa about this (Apache with Fedora, which you should have put into the question), I would suggest:
Post to the Apache forum, highlighting that you're running the latest version of Apache on Fedora, and submit the exploit to them.
Post to Fedora's forum, again highlighting that you're running the latest version of Apache, and submit the exploit to them.
Include your httpd.conf when posting to both forums.
To minimize access to passwd files, look into running Apache in a sandboxed/chrooted environment where files such as passwd are not visible from inside. If you have a spare box lying around to experiment with, or better, use VMware to simulate the environment you are using for Apache/Fedora, try to get it to be an IDENTICAL environment, make the httpd server run within VMware, and remotely access the virtual machine to check whether the exploit is still reachable. Then chroot/sandbox it and re-run the exploit again...
Document the steps to reproduce it and include a recommendation until a fix is found. Meanwhile, if there is minimal impact to the web server running in a sandboxed/chrooted environment, push them to do so...
Hope this helps,
Best regards,
Tom.
If you can already view /etc/passwd, then the server must be poorly configured...
If you really want to execute commands, then you need to know whether the PHP script running on the server uses any system() call, so that you can pass commands through the URL,
e.g. url?command=ls
Try to view the .htaccess files... they may do the trick.
