When running the curl command (both with and without credentials) I always get a Location header in the output, which looks correct, and because of this I get
HTTP/1.1 302 Found, but in reality the application is down.
Any idea how to bypass this, or how to check the application's real status?
[root@VDCLP3213 ~]# curl -Ik http://grid-net.gs.ec.ge.com/GestionHeures --user (username:Password)
HTTP/1.1 302 Found
Date: Tue, 23 Jan 2018 10:14:52 GMT
Expires: Wed, 01 Jan 1997 12:00:00 GMT
Cache-Control: private,no-store,no-cache,max-age=0
Location: https://fss.gecompany.com/fss/idp/SSO.saml2?SAMLRequest=fZHBbsIwEER%2FJfI9cRJCQRaJlMKhSLSghvbQS%2BU4S7Dk2KnXKeXva6BV6YWrPfN2ZneGvFM9Kwe318%2FwMQC64KtTGtn5IyeD1cxwlMg07wCZE6wqH1csjWLWW%2BOMMIoEJSJYJ42eG41DB7YC%2BykFvDyvcrJ3rkdGaWtlE2pwUYsRiKiFSJiOVntZ10aB20eIhp7gKd2sqy0JFj6N1PzE%2FaPsEL3VO3uuj2eCf6Gy6WlVraNT6pQEy0VO3rPxpG5EPJnuGj6eTtLdiIsk4TGI%2BI5Ps8zLEAdYanRcu5ykcTIN4yRMR9skZknGxukbCTY%2FJe%2BlbqRub2%2BkvoiQPWy3m%2FDS4hUsnht4ASlmp4TsPNhebfo2lv%2BulxQthl4fHkD56hD2xjouVdjbZkav0Jc5PXvyrOViY5QUx6BUyhzmFriDnCSEFhfL%2F%2FMX3w%3D%3D&RelayState=ss%3Amem%3A7871d5ec2f67dc36f0c796d589df7cc5f38664a8a79eb7daa3d8f80059eb8259
Connection: close
Content-Type: text/html; charset=iso-8859-1
Please help
Use the -L or --location option to follow redirects.
Note that this will still show the headers for all the intermediate sites; you'll need to parse out the last HTTP/1.1 line and its following headers to get the headers from the final target.
$ curl -s -I -L online.bridgebase.com/purchase/pay.php
HTTP/1.1 302 Moved Temporarily
Server: nginx/1.6.2
Date: Tue, 23 Jan 2018 11:04:23 GMT
Content-Type: text/html
Content-Length: 160
Connection: close
Location: https://www.bridgebase.com/purchase/pay.php
Set-Cookie: SRV=www2.dal06.sl; path=/; domain=.bridgebase.com
HTTP/1.1 200 OK
Server: nginx/1.6.2
Date: Tue, 23 Jan 2018 11:04:23 GMT
Content-Type: text/html; charset=utf-8
Connection: close
Vary: Accept-Encoding
X-Powered-By: PHP/5.4.45-0+deb7u11
Set-Cookie: PHPSESSID=og3dirjhdi4lhtm17iav8kgm67; path=/
Expires: Thu, 19 Nov 1981 08:52:00 GMT
Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0
Pragma: no-cache
Set-Cookie: SRV=www1.dal09.sl; path=/; domain=.bridgebase.com
imac:barmar $ curl --version
curl 7.54.0 (x86_64-apple-darwin14.5.0) libcurl/7.54.0 OpenSSL/1.0.2k zlib/1.2.5 libssh2/1.8.0
Protocols: dict file ftp ftps gopher http https imap imaps ldap ldaps pop3 pop3s rtsp scp sftp smb smbs smtp smtps telnet tftp
Features: IPv6 Largefile NTLM NTLM_WB SSL libz TLS-SRP UnixSockets HTTPS-proxy
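If all you need is a health check rather than the full header dump, curl can also print just the final status code after following redirects. A minimal sketch, using the URL and credential placeholders from the question:

# Follow redirects (-L), discard the body, and print only the final HTTP status code.
# Anything other than 200 (or 000 on a connection failure) means the application is not healthy.
status=$(curl -sk -o /dev/null -w '%{http_code}' -L --user 'username:Password' 'http://grid-net.gs.ec.ge.com/GestionHeures')
echo "Final status: $status"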
Problem description
When making an HTTP POST request to my Express app (Node.js), the server responds with two "100 Continue" responses followed by a "200 OK" response, when there should only be one "100 Continue".
Postman manages to ignore the second "100 Continue" and reports success, but another app I'm building with Mule ESB doesn't accept the extra "100 Continue" and fails.
The app otherwise handles the POST request without any problems; data is written to the database and so forth.
About the technical environment
The Express app is running on a SUSE server. Nginx serves as a reverse proxy. I manage the multiple Express apps with PM2.
At first I was using a separate Express app, with a proxy package, to act as the reverse proxy. I switched to Nginx as the reverse proxy, thinking that might have been the issue, but it made no difference.
I have tried the exact same setup with the Express reverse proxy on my local machine, and that doesn't return "100 Continue" at all, only "200 OK".
I can't figure out why the exact same app returns different responses when it runs locally versus on the server.
Example response
**REQUEST**
POST /<my-endpoint> HTTP/1.1
Host: api.<my-server>.se:443
User-Agent: AHC/1.0
Connection: keep-alive
Accept: */*
Content-Type: application/json; charset=UTF-8
Content-Length: 123
**RESPONSE**
HTTP/1.1 100 Continue
Strict-Transport-Security: max-age=94608000
Date: Thu, 09 May 2019 10:11:34 GMT
Set-Cookie: WASID_HAG=f88cc3f3525ed06b; path=/; domain=.<my-domain>.se
Set-Cookie: WAAK_HAG=41ef118d9c4608f158cfbdcb0140e652; path=/; domain=.<my-domain>.se; secure
Set-Cookie: UPD=6; path=/; domain=.<my-domain>.se
Set-Cookie: RGSC16972=1030; path=/; domain=.<my-domain>.se
HTTP/1.1 100 Continue
Strict-Transport-Security: max-age=94608000
Date: Thu, 09 May 2019 10:11:34 GMT
Set-Cookie: WASID_HAG=f88cc3f3525ed06b; path=/; domain=.<my-domain>.se
Set-Cookie: WAAK_HAG=41ef118d9c4608f158cfbdcb0140e652; path=/; domain=.<my-domain>.se; secure
Set-Cookie: UPD=6; path=/; domain=.<my-domain>.se
Set-Cookie: RGSC16972=1030; path=/; domain=.<my-domain>.se
HTTP/1.1 200 OK
Server: nginx/1.16.0
Date: Thu, 09 May 2019 10:11:34 GMT
Content-Type: application/json; charset=utf-8
Content-Length: 227
Access-Control-Allow-Origin: *
X-DNS-Prefetch-Control: off
X-Frame-Options: SAMEORIGIN
Strict-Transport-Security: max-age=15552000; includeSubDomains
X-Download-Options: noopen
X-Content-Type-Options: nosniff
X-XSS-Protection: 1; mode=block
ETag: W/"e3-aMSDU6iiOn52HqHZuJKLaZNt38k"
Set-Cookie: WASID_HAG=f88cc3f3525ed06b; path=/; domain=.<my-domain>.se
Set-Cookie: WAAK_HAG=41ef118d9c4608f158cfbdcb0140e652; path=/; domain=.<my-domain>.se; secure
Set-Cookie: UPD=6; path=/; domain=.<my-domain>.se
Set-Cookie: RGSC16972=1030; path=/; domain=.<my-domain>.se
Cache-control: no-store
What could be causing the multiple "100 Continue" responses, and how can I prevent them?
Since the app doesn't return any "100 Continue" responses at all on my local machine, could something on the server/firewall be causing this?
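For reference, a rough way to reproduce the handshake with curl, assuming the endpoint placeholders from the example above (the payload here is made up; curl only sends Expect: 100-continue on its own for larger bodies, so it is forced explicitly):

# Force the Expect: 100-continue handshake and show interim responses with -v.
curl -v -X POST 'https://api.<my-server>.se/<my-endpoint>' \
  -H 'Content-Type: application/json; charset=UTF-8' \
  -H 'Expect: 100-continue' \
  -d '{"example":"placeholder"}'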
I'm seeing a situation where requests through CloudFront have a different Cache-Control than my origin. I have Object Caching set to "Use Origin Cache Headers" and (I don't think this is relevant) Compress Objects Automatically set to "No".
I've found that if I change Object Caching to "Customize" and change the value, that does in fact change the headers returned from the CDN. That's okay and all... but I'm curious to know why, with my existing settings, this header isn't being passed through.
Thanks!
Compressed Request from Origin - shows Cache-Control max-age of 31536000
(05:34 PM) jsharpe@mbp:~ curl -I https://staging.testing.com/assets/application-0d5691ba401c3f5a305fda52745a831376545a605a6c16e50fc838fdaa567e57.css --compressed
HTTP/1.1 200 OK
Server: Cowboy
Date: Wed, 16 Aug 2017 21:34:22 GMT
Connection: keep-alive
Last-Modified: Wed, 16 Aug 2017 05:05:25 GMT
Content-Type: text/css
Cache-Control: public, max-age=31536000
Content-Encoding: gzip
Vary: Accept-Encoding, Origin
Content-Length: 33563
Via: 1.1 vegur
Compressed Request from CDN - shows Cache-Control max-age of 86400
(05:34 PM) jsharpe@mbp:~ curl -I https://staging-cdn.testing.com/assets/application-0d5691ba401c3f5a305fda52745a831376545a605a6c16e50fc838fdaa567e57.css --compressed
HTTP/1.1 200 OK
Content-Type: text/css
Content-Length: 33563
Connection: keep-alive
Server: Cowboy
Date: Wed, 16 Aug 2017 05:07:12 GMT
Last-Modified: Wed, 16 Aug 2017 05:05:25 GMT
Cache-Control: public, max-age=86400
Content-Encoding: gzip
Via: 1.1 vegur, 1.1 7d327ef7e21429ba6a44eb6374c976f3.cloudfront.net (CloudFront)
Vary: Accept-Encoding
Age: 59233
X-Cache: Hit from cloudfront
X-Amz-Cf-Id: TEqKbQ5ZYySY7m8rDft_MAlygEiam6gYvzrXBpS7D2DrBNbVUZ1y3Q==
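For what it's worth, a quick way to compare just the Cache-Control header from the origin and from the CDN for the same asset (same URLs as above):

# Print the Cache-Control header returned by the origin and by the CDN.
for host in staging.testing.com staging-cdn.testing.com; do
  printf '%s: ' "$host"
  curl -sI "https://$host/assets/application-0d5691ba401c3f5a305fda52745a831376545a605a6c16e50fc838fdaa567e57.css" | grep -i '^cache-control'
done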
I have a website which is online. When I'm using it via a browser everything is OK and the page displays in the browser. When I fetch it as Googlebot (via Webmaster Tools) I get this error:
HTTP/1.1 404 Not Found
Date: Mon, 19 Nov 2012 09:57:37 GMT
Server: Apache
X-Powered-By: PHP/5.2.17
Expires: Thu, 19 Nov 1981 08:52:00 GMT
Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0
Pragma: no-cache
Set-Cookie: symfony=55240a0a341202d07fc96cbc1c1bcca5; path=/
Keep-Alive: timeout=2, max=200
Connection: Keep-Alive
Transfer-Encoding: chunked
Content-Type: text/html; charset=utf-8
...followed by the rest of the HTML code.
The same thing happens when I try to validate it with the W3C validator.
Please help :( I've tried everything :(
The website address is mojaczestochowa.pl.
If more info is needed, please let me know.
Try checking the page with a web sniffer and set the user agent to Googlebot.
Here is the exact query, which will simulate the server's response to the Googlebot crawler:
https://websniffer.cc/?url=http://mojaczestochowa.pl/&uak=9
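Equivalently, you can simulate the crawler from the command line by overriding the User-Agent; a small sketch using Google's published Googlebot UA string:

# Fetch the page while identifying as Googlebot and show only the response headers.
curl -sI -A 'Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)' 'http://mojaczestochowa.pl/'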
I've found that some venues will only return photos if I use a signed-in user instead of a client_id / client_secret. Is this intentional?
curl -i https://api.foursquare.com/v2/venues/4c36476d93db0f47f6cc1d92/photos?client_id=xxx\&client_secret=xxx\&group=venue\&v=20120304
HTTP/1.1 200 OK
Access-Control-Allow-Origin: *
Cache-Control: no-cache, private, no-store
Content-Type: application/json; charset=utf-8
Date: Mon, 05 Mar 2012 00:28:34 GMT
Expires: Mon, 5 Mar 2012 00:28:34 GMT
Pragma: no-cache
Server: nginx/0.8.52
X-RateLimit-Limit: 5000
X-RateLimit-Remaining: 4999
Content-Length: 66
Connection: keep-alive
{"meta":{"code":200},"response":{"photos":{"count":0,"items":[]}}}
curl -i https://api.foursquare.com/v2/venues/4c36476d93db0f47f6cc1d92/photos?group=venue\&v=20120304\&oauth_token=xxx\&v=20120304
HTTP/1.1 200 OK
Access-Control-Allow-Origin: *
Cache-Control: no-cache, private, no-store
Content-Type: application/json; charset=utf-8
Date: Mon, 05 Mar 2012 00:29:19 GMT
Expires: Mon, 5 Mar 2012 00:29:19 GMT
Pragma: no-cache
Server: nginx/0.8.52
X-RateLimit-Limit: 1000
X-RateLimit-Remaining: 1000
Content-Length: 15311
Connection: keep-alive
{"meta":{"code":200},"notifications":[{"type":"notificationTray","item":{"unreadCount":0}}],"response":{"photos":{"count":14,"items":[lots of images here]}}}
I want to fetch a photo to associate with a given place as a background process, not tied to a specific user. Is it intended that this API only functions correctly for signed-in users?
Looks like there's a bug in userless access to /venues/photos. The team is investigating. The intended behavior is that userless access of that endpoint returns all public photos attached to that venue.
When I run the curl command below with the --negotiate option, I get the following error. Any idea why?
[Aug05 5:03am] pradeep@localhost:/tmp/pradeep> curl --negotiate -u : -k --verbose --head "http://something.domain.com/something/soething.action"
About to connect() to something.domain.com port 80 (#0)
Trying ip-address ... connected
Connected to something.domain.com (ip-address) port 80 (#0)
HEAD /something.action HTTP/1.1
User-Agent: curl/7.21.6 (i386-pc-solaris2.10) libcurl/7.21.6 OpenSSL/0.9.8j zlib/1.2.3
Host: something.domain.com
Accept: */*
< HTTP/1.1 401 Unauthorized
HTTP/1.1 401 Unauthorized
< Date: Fri, 05 Aug 2011 09:04:45 GMT
Date: Fri, 05 Aug 2011 09:04:45 GMT
< Server: Apache-Coyote/1.1
Server: Apache-Coyote/1.1
* gss_init_sec_context() failed: : KDC policy rejects requestWWW-Authenticate: Negotiate
WWW-Authenticate: Negotiate
< Set-Cookie: JSESSIONID=0E94E134D7401632EBB4D042B8934DCD; Path=/
Set-Cookie: JSESSIONID=0E94E134D7401632EBB4D042B8934DCD; Path=/
< Content-Type: text/plain
Content-Type: text/plain
* no chunk, no close, no size. Assume close to signal end
I am able to open the site normally from a browser. Why am I not able to authenticate here? Can someone help me understand?
Two things you can try:
Remove --head. You seem to want to send a GET request, not a HEAD request.
Don't forget to provide the credentials, as in this example: -u pierre:secret
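Putting both suggestions together, the call would look roughly like this (the username, password and URL are just the placeholders from the question):

# GET instead of HEAD, with explicit credentials for the Negotiate handshake.
curl --negotiate -u pierre:secret -k --verbose 'http://something.domain.com/something/soething.action'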