Shell script wget download from S3 - Forbidden error - linux

I am trying to download a file from Amazon's S3 using a shell script and the wget command. The file in question has public permissions, and I am able to download it using a standard browser. So far this is what I have in the script:
wget --no-check-certificate -P /tmp/soDownloads https://s3-eu-west-1.amazonaws.com/myBucket/myFolder/myFile.so
cp /tmp/soDownloads/myFile.so /home/hadoop/lib/native
The problem is a bit odd to me. While I am able to download the file directly from the terminal (just typing the wget command), an error pops up when I try to execute the shell script that contains the very same command line (the script is run with sh myScript.sh).
--2014-06-26 07:33:57-- https://s3-eu-west-1.amazonaws.com/myBucket/myFolder/myFile.so%0D
Resolving s3-eu-west-1.amazonaws.com (s3-eu-west-1.amazonaws.com)... XX.XXX.XX.XX
Connecting to s3-eu-west-1.amazonaws.com (s3-eu-west-1.amazonaws.com)|XX.XXX.XX.XX|:443... connected.
WARNING: cannot verify s3-eu-west-1.amazonaws.com's certificate, issued by ‘/C=US/O=VeriSign, Inc./OU=VeriSign Trust Network/OU=Terms of use at https://www.verisign.com/rpa (c)10/CN=VeriSign Class 3 Secure Server CA - G3’:
Unable to locally verify the issuer's authority.
HTTP request sent, awaiting response... 403 Forbidden
2014-06-26 07:33:57 ERROR 403: Forbidden.
Now, I am aware this may just be a beginner error on my side, but I am not able to detect any misspelling or error of any kind. I would appreciate any help you can provide to solve this issue.
As a note, I would like to point out that I am running the script on an EC2 instance provided by Amazon's Elastic MapReduce framework, in case that has something to do with the issue.

I suspect that the editor you used to write that script has left you a little "gift."
The command line isn't the same. Look closely:
--2014-06-26 07:33:57-- ... myFolder/myFile.so%0D
^^^ what's this about?
That's the URL encoding for an ASCII CR (decimal 13, hex 0x0D). You have an embedded carriage return character in the script that shouldn't be there; wget is seeing it as the last character in the URL and sending it to S3.
Using the less utility to view the file, or an editor like vi, this stray character might show up as ^M... or, if they're all over the file, when you open it with vi, you should see this at the bottom of the screen:
"foo" [dos] 1L, 5C
^^^^^
If you see that, then inside vi...
:set ff=unix[enter]
:x[enter]
...will convert the line endings, and save the file in what should be a usable format, if this is really the problem you're having.
If you're editing files on Windows, you'll want to use an editor that knows how to save files with Unix line endings (bare LF newlines), not DOS-style CR LF line endings.
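If you'd rather diagnose and fix this from the command line instead of inside vi, something along these lines should work (a minimal sketch; dos2unix may not be installed by default):
cat -A myScript.sh            # CR characters show up as ^M before the line-end $
sed -i 's/\r$//' myScript.sh  # strip trailing carriage returns in place (GNU sed)
tr -d '\r' < myScript.sh > myScript.unix.sh   # or write a cleaned copy instead
dos2unix myScript.sh          # or use the dedicated tool, if available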

Related

Opensips-cli -x command not working in opensips 3.3

Recently I have been working on manually upgrading my OpenSIPS version from 2.2 to 3.3.
The upgrade is done on my side, but in the old OpenSIPS (2.2) I was able to show registered (SIP) users using the opensipsctl ul show command, while in the new version 3.3 opensipsctl is deprecated (I guess, not sure).
So I am trying to get those details using opensips-cli, but I could not find the correct commands for showing registrations and dumping the list. I tried to follow the link below but did not find the correct command:
https://www.opensips.org/Documentation/Interface-CoreMI-3-0
Also, my opensips-cli -x command is not working, giving the error below (the mi_fifo module is loaded correctly).
# opensips-cli -o output_type=yaml -x mi uptime
ERROR: cannot access fifo file /tmp/opensips_fifo: [Errno 13] Permission denied: '/tmp/opensips_fifo'
ERROR: starting with Linux kernel 4.19, processes can no longer read from FIFO files
ERROR: that are saved in directories with sticky bits (such as /tmp)
ERROR: and are not owned by the same user the process runs with.
ERROR: To fix this, either store the file in a non-sticky bit directory (such as /var/run/opensips),
ERROR: or disable fifo file protection using 'sysctl fs.protected_fifos=0' (NOT RECOMMENDED)
The /tmp/opensips_fifo file is also created correctly:
# ls -l /tmp/opensips_fifo
prw-rw-rw- 1 opensips opensips 0 Dec 29 06:52 /tmp/opensips_fifo
Using the opensips-cli command I am able to create the database and add tables, but I am not able to perform the -x command.
Can anyone help me find the commands for showing registered users and dumping the list, and offer any suggestions related to the -x command not working in opensips-cli?
I had a similar error and I found the following:
If you state in the opensips-cli.cfg file that the fifo_file is located at /tmp/opensips_fifo, it will produce this error. Try changing this setting to /var/run/opensips/opensips_fifo.
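For reference, a sketch of the two settings that have to agree (assuming stock file locations; adjust to your install). In opensips-cli.cfg:
[default]
fifo_file: /var/run/opensips/opensips_fifo
And the matching mi_fifo parameter in opensips.cfg, so the server creates the FIFO at the same path:
modparam("mi_fifo", "fifo_name", "/var/run/opensips/opensips_fifo")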

Starting openvpn client with client.ovpn file along with command line options

Is there any way I can start the OpenVPN client with additional command-line options, such as the SSL/TLS cert and key parameters, instead of mentioning them in the client.ovpn file? Currently I have the cert/key in local buffers which I don't want to write to a file, and I want to give them directly to openvpn when starting it, along with the '--config client.ovpn' option, from my Python program.
os.system('openvpn --config '+ pathToOpenvpnOvpnFile +' --log '+ pathToOpenvpnLogFile)
I checked that there are '--cert' and '--key' options; however, they take a file path. Is there any way I can give variable contents to these options while providing the other necessary details from client.ovpn via the '--config' option?
Is there any way a Python subprocess can take these '--cert' and '--key' values from buffers instead of files, plus the '--config' option with pathToOpenvpnOvpnFile?
Thanks in advance for your replies.
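One untested sketch, assuming the PEM material sits in shell variables and that your openvpn build can read the /dev/fd paths that bash process substitution creates (some builds re-open their input files and may not cope):
openvpn --config client.ovpn \
        --cert <(printf '%s\n' "$CERT_PEM") \
        --key  <(printf '%s\n' "$KEY_PEM") \
        --log  client.log
From Python you could run that line through subprocess with shell=True and executable='/bin/bash', since process substitution is a bash feature. Another documented route is embedding the material directly in the config file inside <cert>...</cert> and <key>...</key> blocks, though that still means writing the secrets to disk.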

Problems with EXEC pplcd from PeopleSoft Application Engine

On a Unix server, I am running an application engine via the process scheduler.
In it, I am attempting to use the "zip" Unix command from within an "Exec" PeopleCode function.
However, I only get the error
PS_Exec(P): Error executing batch command with reason: No such file or directory (2)
I have tried it several ways. The most logical approach I thought was to change directory back to the root, then change to the specified directory so that I could easily use the zip command, such as the following...
Exec("cd / && cd /opt/psfin/pt850/dat/PSFIN1/PYMNT && zip INVREND INVREND.XML");
1643 12.20.34 0.000048 72: Exec("cd /opt/psfin/pt850/dat/PSFIN1/PYMNT");
1644 12.20.34 0.001343 PS_Exec(P): Error executing batch command with reason: No such file or directory (2)
I've even tried the following....just to see if anything works from within an Exec...
Exec("ls");
Sure enough, it gave the same error.
Now, some of you may be wondering: does the account associated with the process scheduler actually have authority on this particular directory path on the server? Well, I was able to create the XML file given in the previous command with no problems.
I just cannot seem to be able to modify it with the Exec issuance of Unix commands.
I'm wondering if this is a rights-and-permissions issue on the Unix server with regard to the operator ID that the process scheduler runs under. However, given that it can create and write to a file there, I cannot understand why the Exec command would meet any resistance... just my shot in the dark.
Any help would be GREATLY appreciated!!!
Thanks,
Flynn
Not sure if you're still having an issue, but in your Exec code, adding the optional %FilePath_Absolute constant should help. When that constant is left off, PeopleSoft automatically prefixes all commands with <PS_HOME>. You'll have to specify absolute paths with this flag on, though. I've changed the command to something that should work:
Exec("zip /opt/psfin/pt850/dat/PSFIN1/PYMNT/INVREND /opt/psfin/pt850/dat/PSFIN1/PYMNT/INVREND.XML", %FilePath_Absolute);
The documentation at PeopleBooks is a little confusing sometimes, but it explains it fairly well in this case.
You can always store the absolute location in a variable and prefix that to your commands so you don't have to keep typing out /opt/psfin/pt850/dat/PSFIN1/PYMNT/.
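For example, a rough PeopleCode sketch along those lines (the variable name is illustrative):
Local string &dir;
&dir = "/opt/psfin/pt850/dat/PSFIN1/PYMNT/";
Exec("zip " | &dir | "INVREND " | &dir | "INVREND.XML", %FilePath_Absolute);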

Authentication error from server: SASL(-13): user not found: unable to canonify

Ok, so I'm trying to configure and install svnserve on my Ubuntu server. So far so good, up to the point where I try to configure SASL (to prevent plain-text passwords).
So: I installed svnserve and made it run as a daemon (I also installed it as a startup script, with the command svnserve -d -r /var/svn).
My repository is in /var/svn and has the following configuration, found in /var/svn/myrepo/conf/svnserve.conf (I left comments out):
[general]
anon-access = none
auth-access = write
realm = my_repo
[sasl]
use-sasl = true
min-encryption = 128
max-encryption = 256
Over to SASL: I created an svn.conf file in /usr/lib/sasl2/:
pwcheck_method: auxprop
auxprop_plugin: sasldb
sasldb_path: /etc/my_sasldb
mech_list: DIGEST-MD5
I created it in that folder as the article at this link suggested: http://svnbook.red-bean.com/nightly/en/svn.serverconfig.svnserve.html#svn.serverconfig.svnserve.sasl (and also because it existed and was listed as a result when I executed locate sasl).
Right after that I executed this command:
saslpasswd2 -c -f /etc/my_sasldb -u my_repo USERNAME
It asked me for a password twice, which I supplied. All going great.
When issuing the following command:
sasldblistusers2 -f /etc/my_sasldb
I get the - correct, as far as I can see - result:
USERNAME#my_repo: userPassword
Restarted svnserve, also restarted the whole server, and tried to connect.
This was the result from my TortoiseSVN client:
Authentication error from server: SASL(-13): user not found: unable to canonify
user and get auxprops
I have no clue at all what I'm doing wrong. I've been scouring the web for the past few hours, but haven't found anything except that I might need to move the svn.conf file to another location - for example, the install location of Subversion itself. which svn gives /usr/bin/svn, so I moved svn.conf to /usr/bin (although that doesn't feel right to me).
Still doesn't work, even after a new reboot.
I'm running out of ideas. Anyone else?
EDIT
I tried changing this (according to what some other forums on the internet told me to do): in the file /etc/default/saslauthd, I changed
START=no
MECHANISMS="pam"
to
START=yes
MECHANISMS="sasldb"
(Actually I had already changed START=no to START=yes before, but I forgot to mention it). But still no luck (I did reboot the whole server).
It looks like svnserve uses default values for SASL...
Check that /etc/sasl2/svn.conf is readable by the svnserve process owner.
If /etc/sasl2/svn.conf is owned by user root, group root, with mode -rw-------, svnserve silently falls back to the default values.
You will not be warned by any log file entry.
see section 4 of https://svn.apache.org/repos/asf/subversion/trunk/notes/sasl.txt:
This file must be named svn.conf, and must be readable by the svnserve process.
(It took me more than 3 days to understand both svnserve-SASL-LDAP and this pitfall at the same time...)
I recommend installing the package cyrus-sasl2-doc and reading the section "Cyrus SASL for System Administrators" carefully.
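A quick check-and-fix sketch (assuming svnserve runs under a dedicated account; "svn" here is a hypothetical user name, substitute your own):
ls -l /etc/sasl2/svn.conf          # verify owner and mode
chmod 0644 /etc/sasl2/svn.conf     # simplest fix: make it world-readable
# or keep it private to the (hypothetical) svnserve account "svn":
chown svn /etc/sasl2/svn.conf && chmod 0600 /etc/sasl2/svn.conf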
I suspect this is caused by the failure handling around the SASL API call:
result = sasl_server_new(SVN_RA_SVN_SASL_NAME,
                         hostname, b->realm,
                         localaddrport, remoteaddrport,
                         NULL, SASL_SUCCESS_DATA,
                         &sasl_ctx);
if (result != SASL_OK)
  {
    svn_error_t *err = svn_error_create(SVN_ERR_RA_NOT_AUTHORIZED, NULL,
                                        sasl_errstring(result, NULL, NULL));
    SVN_ERR(write_failure(conn, pool, &err));
    return svn_ra_svn__flush(conn, pool);
  }
As you can see, svnserve has no specific handling for this access failure; only OK or a generic error is foreseen...
I looked in /var/log/messages and found
localhost svnserve: unable to open Berkeley db /etc/sasldb2: No such file or directory
When I created the sasldb at the above path and got the permissions right, it worked. It looks like it ignores, or does not use, the sasldb_path setting.
There was another suggestion that rebooting solved the problem but that option was not available to me.
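In other words, something like the following worked (a sketch; "svn" is a hypothetical account name for whoever runs svnserve):
saslpasswd2 -c -f /etc/sasldb2 -u my_repo USERNAME
chown svn /etc/sasldb2
chmod 0640 /etc/sasldb2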

wget and curl somehow modifying bencode file when downloading

Okay, so I have a bit of a weird problem going on that I'm not entirely sure how to explain... Basically, I am trying to decode a bencoded file (a .torrent file). Now, I have tried 4 or 5 different scripts I found via Google and S.O., with no luck (I get returns like "this is not a dictionary", or output errors to the same effect).
Now I am downloading the .torrent file like so
wget http://link_to.torrent file
//and have also tried with curl like so
curl -C - -O http://link_to.torrent
and am concluding that something is happening to the file when I download it in this way.
The reason for this is that I found a site which will decode a .torrent file you upload online and display the info contained in the file. It works when I download the file by clicking the link in a browser; however, when I download a .torrent file using one of the methods described above, it does not work.
So, has anyone experienced a similar problem using one of these methods and found a solution, or can anyone explain why this is happening?
I can't find much online about it, nor do I know of a workaround I can use for my server.
Update:
Okay, as was suggested by @coder543, I compared the file size of the browser download vs. wget. They are not the same size: using wget results in a smaller file size, so clearly the problem is with wget & curl and not something else... ideas?
Update 2:
Okay, so I have tried this now a few times and I am narrowing down the problem a little bit: it only seems to occur on torcache and torrage links. Links from other sites seem to work properly, or as expected... so here are some links and my results from the three different methods:
*** different sizes***
http://torrage.com/torrent/6760F0232086AFE6880C974645DE8105FF032706.torrent
wget -> 7345 , curl -> 7345 , browser download -> 7376
*** same size***
http://isohunt.com/torrent_details/224634397/south+park?tab=summary
wget -> 7491 , curl -> 7491 , browser download -> 7491
*** different sizes***
http://torcache.net/torrent/B00BA420568DA54A90456AEE90CAE7A28535FACE.torrent?title=[kickass.to]the.simpsons.s24e12.hdtv.x264.lol.eztv
wget -> 4890 , curl-> 4890 , browser download -> 4985
*** same size***
http://h33t.com/download.php?id=cc1ad62bbe7b68401fe6ca0fbaa76c4ed022b221&f=Game%20of%20Thrones%20S03E10%20576p%20HDTV%20x264-DGN%20%7B1337x%7D.torrent
wget-> 30632 , curl -> 30632 , browser download -> 30632
*** same size***
http://dl7.torrentreactor.net/download.php?id=9499345&name=ubuntu-13.04-desktop-i386.iso
wget-> 32324, curl -> 32324, browser download -> 32324
*** different sizes***
http://torrage.com/torrent/D7497C2215C9448D9EB421A969453537621E0962.torrent
wget -> 7856 , curl -> 7556 ,browser download -> 7888
So it seems to work well on some sites, but not on sites which rely on torcache.net and torrage.com to supply files. Now, it would be nice if I could just use other sites that don't rely directly on the caches; however, I am working with the bitsnoop API (which pulls all its data from torrage.com, so that's not really an option). Anyway, if anyone has any ideas on how to solve this problem, or steps to take towards finding a solution, it would be greatly appreciated!
Even if anyone can just reproduce the results, it would be appreciated!
... My server is Ubuntu 12.04 LTS on a 64-bit architecture, and the laptop I tried the actual download comparison on is the same.
For the file retrieved using the command line tools I get:
$ file 6760F0232086AFE6880C974645DE8105FF032706.torrent
6760F0232086AFE6880C974645DE8105FF032706.torrent: gzip compressed data, from Unix
And sure enough, decompressing using gunzip will produce the correct output.
Looking into what the server sends gives an interesting clue:
$ wget -S http://torrage.com/torrent/6760F0232086AFE6880C974645DE8105FF032706.torrent
--2013-06-14 00:53:37-- http://torrage.com/torrent/6760F0232086AFE6880C974645DE8105FF032706.torrent
Resolving torrage.com... 192.121.86.94
Connecting to torrage.com|192.121.86.94|:80... connected.
HTTP request sent, awaiting response...
HTTP/1.0 200 OK
Connection: keep-alive
Content-Encoding: gzip
So the server does report that it's sending gzip-compressed data, but wget and curl ignore this.
curl has a --compressed switch which will correctly decompress the data for you. This should be safe to use even for uncompressed files: it just tells the HTTP server that the client supports compression, and curl looks at the received headers to decide whether the response actually needs decompression.
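For example:
curl --compressed -O http://torrage.com/torrent/6760F0232086AFE6880C974645DE8105FF032706.torrent
And, as a sketch for recovering copies you already downloaded without it (gzip -dc decompresses regardless of the file's extension):
gzip -dc < 6760F0232086AFE6880C974645DE8105FF032706.torrent > fixed.torrent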
