Case 1:
My server (accessed remotely; it does not have internet access) has connectivity to the remote server on port 443. Using the web service URL, I need to send a web service request and receive the response. I am able to send the request, but I am unable to receive the response from the remote server.
Code:
Here is the code I am using to send and receive the HTTPS request with LWP::UserAgent in Perl:
use Data::Dumper;
use LWP::UserAgent;
use HTTP::Status;
use HTTP::Response;
use HTTP::Request::Common;
$ENV{PERL_LWP_SSL_VERIFY_HOSTNAME} = 0;
my $LWPUserAgent = LWP::UserAgent->new( timeout => 20 );
$LWPUserAgent->ssl_opts( verify_hostname => 0 );

my $WEB_URL  = "https://webserviceurl.com/Request?Arg1|Arg2|Arg3|Arg4";
my $Response = $LWPUserAgent->get($WEB_URL);
print Dumper $Response;
I printed the response using Data::Dumper and got the following output:
$VAR1 = bless( {
    '_content' => 'Status read failed: at /usr/share/perl5/Net/HTTP/Methods.pm line 269.',
    '_rc'      => 500,
    '_headers' => bless( {
        'client-warning' => 'Internal response',
        'client-date'    => 'Tue, 13 Oct 2015 15:13:21 GMT',
        'content-type'   => 'text/plain'
    }, 'HTTP::Headers' ),
    '_msg'     => 'Status read failed: ',
    '_request' => bless( {
        '_content' => '',
        '_uri'     => bless( do{\(my $o = 'https://webserviceurl.com/Request?Arg1%7Arg2%7Arg3%7Arg4')}, 'URI::https' ),
        '_headers' => bless( {
            'user-agent' => 'libwww-perl/6.04'
        }, 'HTTP::Headers' ),
        '_method'  => 'GET'
    }, 'HTTP::Request' )
}, 'HTTP::Response' );
I searched for this error on Google but could not find anything helpful.
My server information is:
OS - Debian wheezy 7.2, 64-bit
perl 5, version 14, subversion 2 (v5.14.2) built for x86_64-linux-gnu-thread-multi
LWP::UserAgent - 6.04
HTTP::Response, HTTP::Status, HTTP::Request::Common - 6.03
Case 2: My server (at home, with internet access) connects using the static IP of my internet connection. I run the same code as above through my proxy by adding the following line:
$LWPUserAgent->proxy('https', 'http://192.168.12.10:3128') ;
With this, I am able to send and receive HTTPS requests using LWP::UserAgent, and it works fine.
My server information is:
OS - Debian squeeze (6.0.6), 32-bit
perl, v5.10.1 (*) built for i486-linux-gnu-thread-multi
LWP::UserAgent - 6.13
HTTP::Response - 5.836
HTTP::Status - 5.817
HTTP::Request::Common - 5.824
I am confused about which of the following is the cause:
1. An OS problem
2. A package version problem
3. A bug in wheezy
If anyone can point me in the right direction to resolve this, it would be highly appreciated.
Please set $ENV{HTTPS_DEBUG} = 1; and post here what the script prints.
UPDATE:
So this works as expected if I add the following:
const options = {
  ...standardStuff,
  family: 6,
}
Adding the family: 6 option makes it work as expected. So I suppose my question changes to: why? The docs state:
IP address family to use when resolving host or hostname.
Valid values are 4 or 6. When unspecified, both IP v4 and v6 will be used.
Which would lead me to conclude that I wouldn't need to set it, as IPv6 is being used anyway. And why does it not matter for curl etc.?
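For reference, a quick way to see which addresses Node's resolver actually returns for the host is sketched below; this is only a rough diagnostic, and the hostname is a placeholder for the real endpoint.

import { promises as dns } from 'dns';

async function main(): Promise<void> {
  // Placeholder hostname; substitute the real restricted endpoint.
  const addresses = await dns.lookup('my-restricted-endpoint.com', { all: true });
  console.log(addresses);
  // e.g. [ { address: '203.0.113.7', family: 4 }, { address: '2001:db8::1', family: 6 } ]
  // If the request goes out over the IPv4 address, an allow list that only
  // contains the IPv6 address will reject it with a 403.
}

main().catch(console.error);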
I have a Zone Lockdown rule on Cloudflare for a CNAME we have registered, with my IPv6 address added to the whitelist. I got this address from googling "what's my IP".
I also added my company's VPN IP address, which is in IPv4 format.
When I curl this endpoint I receive the expected 200; however, when I run the request via Node.js I receive a 403.
This is even stranger because I am able to access the endpoint via Go, Insomnia, and curl, and I am also able to access it via Node.js when I am connected to an IPv4 network, e.g. the VPN, or if I tether my phone to my laptop.
curl https://my-restricted-endpoint.com
# 200
package main

import (
	"fmt"
	"log"
	"net/http"
)

func main() {
	resp, err := http.Get("https://my-restricted-endpoint.com")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(resp.StatusCode)
	// 200
}
import { request, RequestOptions } from 'https';
import { URL } from 'url';

const requestAsync = (options: RequestOptions | string | URL): Promise<number> => {
  return new Promise((resolve, reject) => {
    const req = request(options, (res) => {
      if (typeof res.statusCode !== 'number') {
        reject('no status code returned');
        return;
      }
      resolve(res.statusCode);
    });
    req.on('error', (error) => {
      reject(error);
    });
    req.on('timeout', () => {
      reject('request timed out');
    });
    req.end();
  });
};
const statusCode = await requestAsync('https://my-restricted-endpoint.com')
// returns 200 on VPN or tethered to my phone with an IPv4 address
// returns 403 otherwise
My knowledge of networking and IPv4/IPv6 is limited to nonexistent, but I feel this is caused by something Node.js is doing with the request.
I have also tried using axios.
So if I only whitelist my IPv6 address, then I need to force Node.js to resolve the hostname to IPv6 by setting { family: 6 }.
If I add both my IPv6 and IPv4 addresses to the whitelist, then I can leave that option alone.
It seems that Go, curl, and Insomnia, which I was using, implement RFC 6555 'Happy Eyeballs', which means that IPv6 is tried first and IPv4 is only used on failure. That is why these worked and Node.js did not. From what I can gather, Node.js does not implement this, which means that because my IPv4 address was not whitelisted on Cloudflare, the request failed.
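To make the workaround concrete, here is a minimal sketch (assuming the Node https module; the endpoint hostname is a placeholder) that pins the request to IPv6 so it originates from the whitelisted address.

import { request, RequestOptions } from 'https';

const options: RequestOptions = {
  host: 'my-restricted-endpoint.com', // placeholder hostname
  path: '/',
  family: 6, // resolve the hostname to an IPv6 address only
};

const req = request(options, (res) => {
  console.log(res.statusCode); // expect 200 once the IPv6 address is whitelisted
  res.resume();
});
req.on('error', console.error);
req.end();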
I am working on a Walmart API integration using Laravel 7, and I have also installed GuzzleHttp. I used DigitalSignatureUtil.jar to generate WM_SEC.AUTH_SIGNATURE and WM_SEC.TIMESTAMP. It works fine and fetches the JSON data the first time. The following is the code:
$client = new GuzzleHttp\Client();
$res = $client->request('GET', 'https://marketplace.walmartapis.com/v3/feeds', [
    'headers' => [
        'WM_SVC.NAME' => 'walmart market place',
        'WM_CONSUMER.ID' => '#########',
        'WM_QOS.CORRELATION_ID' => '########',
        'WM_CONSUMER.CHANNEL.TYPE' => '######',
        'WM_SEC.AUTH_SIGNATURE' => '#######',
        'WM_SEC.TIMESTAMP' => '1596290047006',
        'Content-Type' => 'application/json',
        'Accept' => 'application/json',
    ]
]);

$products = json_decode((string) $res->getBody(), true);
return view('product', compact('products'));
NOTE: But it gives errors if I use the code the next day, or even after a few minutes. I get the following error:
GuzzleHttp\Exception\ClientException
Client error: `GET https://marketplace.walmartapis.com/v3/feeds` resulted in a `401 Unauthorized`
response: {"error": [{"code":"UNAUTHORIZED.GMP_GATEWAY_API",
"field":"UNAUTHORIZED","description":"Unauthorized","info":"Unauthorize (truncated...)
Please help me: what should I do to get rid of this?
I hope someone out there can help me with this one!
Task:
Send XML files to ActiveMQ.
Environments:
Developing:
OS X 10.10.5
node 4.4.3
stompit 0.25.0
Production:
Ubuntu 16.04
node 7.8.0 (tried 4.4.3 too with same results)
stompit 0.25.0
I'm always connecting this way:
var server1 = { 'host': 'activemq-1.server.lan' };
var server2 = { 'host': 'activemq-2.server.lan' };
var servers = [server1, server2];
var reconnectOptions = { 'maxReconnects': 10 };
var manager = new stompit.ConnectFailover(servers, reconnectOptions);
Headers I set for each frame:
const sendHeaders = {
  'destination'  : '/queue/name_of_queue',
  'content-type' : 'text/plain',
  'persistent'   : 'true'
};
I'm not allowed to set the content-length header, as this would force the server to interpret the stream as a binary stream.
When connected to the server, I connect to a PostgreSQL server to fetch the data to send.
What works:
var frame = client.send(sendHeaders);
frame.write(row.pim_content);
frame.end();
But it only works on the development machine. When running this in the production environment, the script runs without throwing errors but never sends the message to the server.
So I tried a different method, just to have a callback when the server receives a message.
var channel = new stompit.Channel(manager);
channel.send(sendHeaders, xml_content, (err) => {
  if (err) {
    console.log(err);
  } else {
    console.log('Message successfully transferred');
  }
});
Now I get the same results in production and development. It works as expected, but ...
It only works as long as the body (xml_content) is at most 1352 characters long. When I add even one more character, the callback of channel.send() never fires.
I'm running out of ideas on what to check or test next to get this working. I hope someone is reading this, laughing, and pointing me in the right direction. Any ideas greatly appreciated!
Thanks in advance,
Stefan
I am exploring Logstash to receive input over HTTP. I have installed the http plugin using:
plugin install logstash-input-http
The installation was successful. Then I tried to run Logstash using the following command:
logstash -e 'input {http {port => 8900}} output {stdout{codec => rubydebug}}'
But Logstash terminates without giving any error.
I don't know how to verify whether the plugin is installed correctly, or how to use the http plugin to test a sample request.
Thanks in Advance!
I was able to solve the problem by using a .conf file instead of command-line arguments.
I created an http-pipeline.conf file similar to the one below:
input {
  http {
    host => "0.0.0.0"
    port => "8080"
  }
}
output {
  stdout {}
}
And then executed Logstash like:
logstash -f http-pipeline.conf
Using Postman, I sent a POST request (http://localhost:8080) to Logstash with a sample string and, voilà, it appeared on the Logstash console.
If you are executing the request from the same domain, the following will be sufficient:
input {
  http {
    port => 5043
  }
}
output {
  file {
    path => "/log_streaming/my_app/app.log"
  }
}
If you want to execute a request from a different domain of the website, then you need to set a few response headers:
input {
  http {
    port => 5043
    response_headers => {
      "Access-Control-Allow-Origin" => "*"
      "Content-Type" => "text/plain"
      "Access-Control-Allow-Headers" => "Origin, X-Requested-With, Content-Type, Accept"
    }
  }
}
output {
  file {
    path => "/log_streaming/my_app/app.log"
  }
}
I'm trying to set up a PHP SoapClient to connect to a wsdl...
CURL & WGET from the server work fine.
If I try to use soapclient I receive the error messages below.
$wsdl = 'http://pav3.cdyne.com/PavService.svc?wsdl';

try {
    $client = new SoapClient($wsdl, array('trace' => true, 'exceptions' => true));
} catch (SoapFault $f) {
    echo $client->__getLastRequest();
    echo $client->__getLastResponse();
    echo $f->getMessage();
} catch (Exception $e) {
    echo $client->__getLastRequest();
    echo $client->__getLastResponse();
    echo $e->getMessage();
}
I get the error message:
Warning: SoapClient::SoapClient(http://pav3.cdyne.com/PavService.svc?wsdl) [soapclient.soapclient]: failed to open stream: HTTP request failed! in /coachflex/www/htdocs/CoachFlex/modules/other/checkAddress.php on line 35
Warning: SoapClient::SoapClient() [soapclient.soapclient]: I/O warning : failed to load external entity "http://pav3.cdyne.com/PavService.svc?wsdl" in /coachflex/www/htdocs/CoachFlex/modules/other/checkAddress.php on line 35
If I try to simply use fopen on the above address, I get:
Warning: fopen(http://pav3.cdyne.com/PavService.svc?wsdl) [function.fopen]: failed to open stream: HTTP request failed! in /coachflex/www/htdocs/CoachFlex/modules/other/checkAddress.php on line 37
I just cannot figure out why I can connect via curl/wget but not through PHP. allow_url_fopen is set to On.
I solved my problem. It was actually an issue with my firewall. The firewall was dropping packets sent via PHP, but packets sent via curl or wget were not being dropped. I added a rule for all traffic from that server and increased the packet drop length, and everything is working great now!
This page was what pointed me in the right direction: http://www.radiotope.com/content/safari-and-sonicwall