wget: server returned error: HTTP/1.1 202 Accepted - linux

While doing a wget from BusyBox v1.23.1 I am getting an error:
wget: server returned error: HTTP/1.1 202 Accepted
The wget call:
wget http://182.72.194.130:7777/device_mgr/device-mgmt/app/cnc/sno/SCNC12J001/updates?cur_fw_ver=1.1(0)7&cur_config_ver=1.0
But when I tried the same command on Ubuntu, it worked. How can this be resolved?

HTTP Code 202
The HyperText Transfer Protocol (HTTP) 202 Accepted response status code indicates that the request has been received but not yet acted upon.
In practice it can mean "got your request okay, but the resource is not yet ready", e.g. a tape archive that still needs to be mounted. Best to try again a while later. When you repeated your request on Ubuntu, the resource had probably been mounted in the meantime.
Wget has some retry parameters you can play with to delay a follow-up request; see https://superuser.com/questions/493640/how-to-retry-connections-with-wget/689340#answer-689340
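For instance, a minimal retry loop for BusyBox wget, which lacks GNU wget's --tries/--waitretry options (the URL is quoted so the shell does not treat the & characters as job separators, and the output filename is only an illustration):
URL='http://182.72.194.130:7777/device_mgr/device-mgmt/app/cnc/sno/SCNC12J001/updates?cur_fw_ver=1.1(0)7&cur_config_ver=1.0'
for i in 1 2 3 4 5; do
    # stop as soon as wget succeeds
    wget -O update.out "$URL" && break
    # otherwise wait and retry, in case the resource is not ready yet
    sleep 30
done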

Related

GitLab Health Check without token

I've got GitLab 10.5.6. I'd like to use the health check information in my monitoring system. I can configure it using the health check endpoints with a health check access token, but as that solution is deprecated, I want to use the IP whitelist instead. And I have some problems with it.
According to this article https://docs.gitlab.com/ee/administration/monitoring/ip_whitelist.html I edited /etc/gitlab/gitlab.rb and added this line (this GitLab was installed around version 7 or even older, I think):
gitlab_rails['monitoring_whitelist'] = ['127.0.0.0/8', '192.168.0.1', 'X.X.X.X', 'Y.Y.Y.Y']
where X.X.X.X is the IP of my computer and Y.Y.Y.Y is the IP of the server running GitLab. After that I ran a reconfiguration (gitlab-ctl reconfigure) and started testing... The logs below are from the production.log file.
Execution of curl http://127.0.0.1:8888/-/readiness on server Y.Y.Y.Y returns proper JSON with expected data:
Started GET "/-/readiness" for 127.0.0.1 at 2018-03-24 20:01:31 +0100
Processing by HealthController#readiness as */*
Completed 200 OK in 27ms (Views: 0.6ms | ActiveRecord: 0.5ms)
Execution of curl http://Y.Y.Y.Y:8888/-/readiness on server Y.Y.Y.Y returns error:
Started GET "/-/readiness" for Y.Y.Y.Y at 2018-03-24 21:20:04 +0100
Processing by HealthController#readiness as */*
Filter chain halted as :validate_ip_whitelisted_or_valid_token! rendered or redirected
Completed 404 Not Found in 2ms (Views: 1.0ms | ActiveRecord: 0.0ms)
Accessing address http://Y.Y.Y.Y:8888/-/readiness through Firefox browser on computer X.X.X.X returns error:
Started GET "/-/readiness" for X.X.X.X at 2018-03-24 20:03:04 +0100
Processing by HealthController#readiness as HTML
Filter chain halted as :validate_ip_whitelisted_or_valid_token! rendered or redirected
Completed 404 Not Found in 2ms (Views: 0.8ms | ActiveRecord: 0.0ms)
Accessing address http://Y.Y.Y.Y:8888/-/readiness?token=ZZZZZZZZZZZZZ through Firefox browser on computer X.X.X.X returns proper JSON with expected data.
I don't have any idea what else to check. Maybe some more configuration is missing from /etc/gitlab/gitlab.rb, as it's quite an old GitLab instance.
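One thing that may be worth verifying (a hedged suggestion; the path below is the usual Omnibus layout, so treat it as an assumption) is whether the whitelist actually made it into the generated Rails configuration after the reconfigure:
# inspect the monitoring section of the generated config
sudo grep -A 5 'monitoring:' /var/opt/gitlab/gitlab-rails/etc/gitlab.yml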

Varnish 5.2 started reporting "500 Internal Server Error"

I have been running Varnish for some time, and about 6 months ago I added a Varnish 5.2 server that had been running perfectly.
A couple of weeks ago we started to see odd "500 Internal Server Error" responses, and older reports of similar symptoms suggest it is the server running out of internal memory.
There was a suggestion to tune the parameters (which I have tried), but I am still getting the errors. Any suggestions on where to look?
Alan
PS The suggested tuning I saw was:
-p workspace_client=160k \
-p workspace_backend=160k \
That is, upping the workspace parameters from the default 64k. I tried 128k and then 160k, but there was no change in the occasional reported errors.
You can control the "maximum number of HTTP header lines" Varnish allows via the http_max_hdr option. The default is 64 and, in my case, setting it to 128 or 256 solved my problem. Note that, for some reason, the value needs to be a power of two, so setting it to 100 or 150 will not allow Varnish to restart.
https://varnish-cache.org/docs/4.1/reference/varnishd.html#http-max-hdr
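Putting the two suggestions together, the relevant varnishd startup flags would look roughly like this (a sketch; the listen address, VCL path, and values are illustrative):
varnishd -a :80 -f /etc/varnish/default.vcl \
    -p http_max_hdr=128 \
    -p workspace_client=160k \
    -p workspace_backend=160k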
After much experimenting and looking at the Varnish log:
sudo /usr/local/bin/varnishlog -n -q 'RespStatus eq 500'
I saw the error:
- RespHeader X-1-SM-None: None
- LostHeader X-1-ServerTXT: Live One
- Error out of workspace (req)
- LostHeader X-1-Cache: MISS
- Error out of workspace (Req)
- VCL_return deliver
- Timestamp Process: 1513078776.040695 0.419343 0.000086
- Error out of workspace (Req)
- LostHeader Accept-Ranges: bytes
- LostHeader Connection: keep-alive
- Error out of workspace (Req)
- Error workspace_client overflow
- RespProtocol HTTP/1.1
- RespStatus 500
- RespReason Internal Server Error
I realised that there were too many resp.http headers being set in vcl_deliver. After removing or commenting out some of them, which I had only been using for debugging, the problem went away.
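For illustration, this is the kind of change involved (the header names are taken from the log above, so treat this VCL as a sketch, not my exact config):
sub vcl_deliver {
    # Each set resp.http.* call consumes request workspace; dropping the
    # debug-only headers kept requests under the workspace_client limit.
    # set resp.http.X-1-ServerTXT = "Live One";
    # set resp.http.X-1-Cache = "MISS";
}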

Network Security Group for Filezilla(client) connection

I am new here.
A few days ago I attended an MS Azure event, and today I registered with Azure (free account).
VM environment: CentOS 7 with apache+php+mysql+vsftpd+phpMyAdmin.
Everything is up and running, and I am able to visit info.php via the VM's public IP address.
SELinux is disabled, firewalld is disabled.
My problem is that I am not able to connect to this server via FileZilla (PC client).
From the Windows command prompt (ftp/put) it works, and I am able to upload files.
But via FileZilla:
Status: Connecting to 5x.1xx.1xx.7x:21...
Status: Connection established, waiting for welcome message...
Status: Insecure server, it does not support FTP over TLS.
Status: Logged in
Status: Retrieving directory listing...
Command: PWD
Response: 257 "/home/ftpuser"
Command: TYPE I
Response: 200 Switching to Binary mode.
Command: PORT 192,168,1,183,234,99
Response: 200 PORT command successful. Consider using PASV.
Command: LIST
Error: Connection timed out after 20 seconds of inactivity
Error: Failed to retrieve directory listing
Status: Disconnected from server
Status: Connecting to 5x.1xx.1xx.7x:21...
Status: Connection established, waiting for welcome message...
Status: Insecure server, it does not support FTP over TLS.
Status: Logged in
Status: Retrieving directory listing...
Command: PWD
Response: 257 "/home/ftpuser"
Command: TYPE I
Response: 200 Switching to Binary mode.
Command: PORT 192,168,1,183,234,137
Response: 200 PORT command successful. Consider using PASV.
Command: LIST
Error: Connection timed out after 20 seconds of inactivity
Error: Failed to retrieve directory listing
I believe this is because of the Network Security Group settings for inbound and outbound rules and that I need to open some port, but I'm not sure which, because I tried allowing all of 1024-65535 and it still doesn't work.
If you use passive-mode FTP, you should open ports 20 and 21 plus the passive ports you need on the Azure NSG (inbound rules). Check /etc/vsftpd.conf:
pasv_enable=YES
pasv_min_port=60001
pasv_max_port=60005
For this example, you should open ports 60001-60005 on the Azure NSG (inbound rules).
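If you prefer the CLI over the portal, an Azure CLI sketch of such a rule might look like this (the resource group, NSG name, and priority are placeholders; also restart vsftpd after editing its config):
az network nsg rule create \
    --resource-group myResourceGroup --nsg-name myNSG \
    --name Allow-FTP-Passive --priority 1010 \
    --direction Inbound --access Allow --protocol Tcp \
    --destination-port-ranges 21 60001-60005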

wget not downloading from authentic https url vaultpress

I'm using VaultPress to take my WordPress blog's backup
https://dashboard.vaultpress.com/
After clicking the download backup button, the site sends me a link from which I can download. When I click on this link, it starts downloading my backup in the browser, and that's perfect. But I have been trying to download it on my Ubuntu system using wget or curl, with no success so far. Here is what the download URL looks like:
https://dashboard.vaultpress.com/12345/restore/?step=4&job=12345678&check=.
eric#eric:~# wget https://dashboard.vaultpress.com/12345/restore/?step=4&job=12345678&check=<somehashedvalue>
[5] 2229
[6] 2230
[6] Done job=12345678
eric#eric:~# --2015-02-08 02:25:07-- https://dashboard.vaultpress.com/12345/restore/?step=4
Resolving dashboard.vaultpress.com (dashboard.vaultpress.com)... 192.0.96.249, 192.0.96.250
Connecting to dashboard.vaultpress.com (dashboard.vaultpress.com)|192.0.96.249|:443... connected.
HTTP request sent, awaiting response... 302 Found
Location: / [following]
--2015-02-08 02:25:09-- https://dashboard.vaultpress.com/
Reusing existing connection to dashboard.vaultpress.com:443.
HTTP request sent, awaiting response... 302 Found
Location: /account/login/ [following]
--2015-02-08 02:25:09-- https://dashboard.vaultpress.com/account/login/
Reusing existing connection to dashboard.vaultpress.com:443.
HTTP request sent, awaiting response... 200 OK
Length: unspecified [text/html]
Saving to: ‘index.html?step=4’
[ <=> ] 7,709 --.-K/s in 0s
2015-02-08 02:25:09 (20.9 MB/s) - ‘index.html?step=4’ saved [7709]
PS: The file size is almost 1 GB.
Then I used user/pass:
eric#eric:~# wget --user <myusername> --password <mypassword> https://aboveurl
I even used --ask-password:
eric#eric:~# wget --user <myusername> --ask-password https://aboveurl
But in this case, instead of asking for the password, it completes the action and then asks for the password at another prompt (I don't know the exact term), something like this:
eric#eric:~# wget --user <myusername> --ask-password https://dashboard.vaultpress.com/12345/restore/?step=4&job=12345678&check=<hashedvalue>
[1] 1979
[2] 1980
eric#eric:~# Password for user ‘<myusername>’: <mypassword-here>
<mypassword>: command not found
And then finally, I gave curl a try:
eric#eric:~# curl -u <myusername>:<mypassword> https://dashboard.vaultpress.com/12345/restore/?step=4&job=12345678&check=<hashedvalue>
[5] 2010
[6] 2011
eric#eric:~#
I don't know what's happening. What are those [5] 2010, [6] 2011, or [5] 2229?
This solution is also not working:
wget with authentication
The ampersands in your URL make the shell start new processes running in the background: everything between two & characters is treated as a separate command. The number in square brackets is the job number, and the PID is printed after it.
Write the URL within double quotes and try again:
wget "https://dashboard.vaultpress.com/12345/restore/?step=4&job=12345678&check=<somehashedvalue>"

HTTP request using telnet not getting any response

We are using telnet to send an HTTP request to a server and get the response.
We noticed something strange when using telnet to send the HTTP GET request.
The first method works in most of the environments, but it does not work in one of them. The second method (using the complete URL instead of the relative path) works fine in that environment.
Method 1:
(printf "GET /test.jsp HTTP/1.0\nAccept: */*\nUser-Agent: WatchDog\n\n"; sleep 9) | telnet xx.xx.xx.xx 8093
Trying xx.xxx.xx.xx...
Connected to xx.xx.xx.xx.
Escape character is '^]'.
Connection closed by foreign host.
Method 2:
(printf "GET http://xx.xx.xx.xx:8093/test.jsp HTTP/1.0\nAccept: */*\nUser-Agent: WatchDog\n\n"; sleep 9) | telnet xx.xx.xx.xx 8093
Trying xx.xx.xx.xx...
Connected to xx.xx.xx.xx.
Escape character is '^]'.
HTTP/1.1 200 OK
Server: Apache-Coyote/1.1
Set-Cookie: JSESSIONID=91643475E80038EA8770CE6803EE320C; Path=/
Content-Type: text/html;charset=UTF-8
Content-Language: zh-US
Content-Length: 42
Date: Mon, 03 Dec 2012 04:25:09 GMT
Connection: close
The Server is Running
Connection closed by foreign host.
Why does method 1 not work in only this one environment? Do we need to check something in that environment?
Please give your suggestions...
Thanks,
Sekhar
HTTP/1.0 (RFC 1945) specifies the line ending to be CR LF. Some servers may apply this rule overly strictly, so try sending the request with \r\n as the line endings. Sending absolute URIs is also reserved for use by proxies (section 5.1.2 of RFC 1945).
If varying the line endings and the URI style doesn't help, you'll have to look at the server's configuration/implementation, as I cannot see anything wrong with method 1.
Apart from the line endings, which must be \r\n, and your Accept header, which should be */* instead of /, your first request doesn't have a host name.
An HTTP/1.1 server may deny HTTP requests that do not have a host set, either in the absolute request-URI or in a Host header.
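Combining both answers, a version of method 1 with CRLF line endings and an explicit Host header would look like this (a sketch based on the commands above):
(printf "GET /test.jsp HTTP/1.0\r\nHost: xx.xx.xx.xx:8093\r\nAccept: */*\r\nUser-Agent: WatchDog\r\n\r\n"; sleep 9) | telnet xx.xx.xx.xx 8093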