How to find the IP Address of incoming requests in couchbase logs? (Non REST API-Non Web Console requests) - node.js

I have gone through all the logs listed at the link below but have not been able to find the IP address:
https://developer.couchbase.com/documentation/server/3.x/admin/Misc/Trbl-logs.html
I am hitting the Couchbase bucket via code and not via REST APIs or the Web Console. Here is a piece of my code:
var couchbase = require("couchbase");
var userCouchbaseIp = subscriptionConf.CouchbaseIp;
var couchbaseBucketName = subscriptionConf.couchbaseBucketName;
var cluster = new couchbase.Cluster(userCouchbaseIp);
var Bucket = cluster.openBucket(couchbaseBucketName);
var Key = "abcd";
Bucket.get(Key, function(errGetKey, resGetKey) {
    console.log("trial console");
    if (errGetKey) {
        console.log("errGetKey: ", errGetKey);
    } else {
        console.log("resGetKey: ", resGetKey);
    }
});

The official SDKs identify themselves using the HELO command when communicating with the Data Service. This is logged in the Data Service logs at /opt/couchbase/var/log/memcached.log*.
This is what a log entry on 5.5-beta looks like:
2018-04-27T10:14:32.031425Z INFO 47: HELO [{"a":"libcouchbase/2.8.6 (Darwin-17.3.0; x86_64; Clang 9.0.0.9000039)","i":"00000000ae092f72/26a9bd662aca89f5"}] TCP nodelay, XATTR, XERROR, Select bucket, Snappy, JSON [ 10.111.180.1:50149 - 10.111.180.101:11210 (not authenticated) ]
It will show the version of the SDK being used (libcouchbase/2.8.6), the connection (10.111.180.1:50149 - 10.111.180.101:11210) and the capabilities the SDK has (TCP nodelay, XATTR, XERROR, Select bucket, Snappy, JSON).
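For example, a quick way to pull the client addresses out of those HELO entries with node.js (a minimal sketch; the default log path and the line format are assumed from the sample entry above):
var fs = require("fs");
var readline = require("readline");

// Scan the Data Service log for HELO entries and print each connection pair.
var rl = readline.createInterface({
    input: fs.createReadStream("/opt/couchbase/var/log/memcached.log")
});
rl.on("line", function (line) {
    if (line.indexOf("HELO") === -1) return;
    // The connection appears as "client_ip:port - server_ip:port" in brackets.
    var match = line.match(/\[ ([\d.]+:\d+) - ([\d.]+:\d+)/);
    if (match) {
        console.log("client:", match[1], "server:", match[2]);
    }
});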

Related

How can we set Proxy setting for Provisioning of Azure IOT device

We are using this repo: https://github.com/Azure/azure-iot-sdk-node
We are trying to set up a DPS service for Azure IoT Hub, and we want to set up a proxy for provisioning through X509. In the sample code "register_x509.js" we are using the "var Transport = require('azure-iot-provisioning-device-mqtt').MqttWs;" library. It has a function called "setTransportOptions", and we are sending our proxy agent as a parameter there:
var transport = new Transport();
transport.setTransportOptions({ webSocketAgent: new HttpsProxyAgent(process.env.HTTP_PROXY) });
var securityClient = new X509Security(registrationId, deviceCert);
var deviceClient = ProvisioningDeviceClient.create(
    provisioningHost,
    idScope,
    transport,
    securityClient
);
// Register the device. Do not force a re-registration.
deviceClient.register(function (err, result) {
    if (err) {
        console.log("error registering device: " + err);
    } else {
        console.log("registration succeeded");
        console.log("assigned hub=" + result.assignedHub);
        console.log("deviceId=" + result.deviceId);
    }
});
The initial tunneling is not happening, due to which the connection is failing. We also saw in the documentation that the Azure SDK has a proxy filter which automatically takes the proxy variable from the environment; we tried that as well but still hit the same issue. Can anyone please suggest a way forward for this use case?
Error we received: UnhandledPromiseRejectionWarning: Error: socket hang up

Google Cloud Storage get signedUrl from CDN npm

I am using code like the following to create a signed URL for my content:
var storage = require('@google-cloud/storage')();
var myBucket = storage.bucket('my-bucket');
var file = myBucket.file('my-file');
//-
// Generate a URL that allows temporary access to download your file.
//-
var request = require('request');
var config = {
    action: 'read',
    expires: '03-17-2025'
};
file.getSignedUrl(config, function(err, url) {
    if (err) {
        console.error(err);
        return;
    }
    // The file is now available to read from the URL.
});
This creates a URL that starts with https://storage.googleapis.com/my-bucket/
If I place that URL in the browser, it is readable.
However, I guess that URL is direct access to the bucket file and is not passing through my configured CDN.
I see in the docs (https://cloud.google.com/nodejs/docs/reference/storage/1.6.x/File#getSignedUrl) that you can pass a cname option, which transforms the URL to replace https://storage.googleapis.com/my-bucket/ with my bucket's CDN.
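For example, passing cname would look roughly like this (a sketch; cdn.example.com stands in for my CDN hostname):
var config = {
    action: 'read',
    expires: '03-17-2025',
    cname: 'https://cdn.example.com' // placeholder for my CDN hostname
};
file.getSignedUrl(config, function(err, url) {
    // url should now start with https://cdn.example.com/
    // instead of https://storage.googleapis.com/my-bucket/
});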
HOWEVER, when I copy the resulting URL, the service account or resulting URL doesn't seem to have access to the resource.
I have added the Firebase admin service account to the bucket, but I still get no access.
Also, from the docs, the CDN signed URL looks a lot different from one signed through that API. Is it possible to create a CDN signed URL from the API, or should I manually create it as explained in: https://cloud.google.com/cdn/docs/using-signed-urls?hl=en_US&_ga=2.131493069.-352689337.1519430995#configuring_google_cloud_storage_permissions?
For anyone interested in the node code for that signing:
var crypto = require("crypto");
var URLSafeBase64 = require('urlsafe-base64');

var url = 'URL of the endpoint served by Cloud CDN';
var key_name = 'Name of the signing key added to the Google Cloud Storage bucket or service';
var key = 'Signing key as urlsafe base64 encoded string';
var expiration = Math.round(new Date().getTime() / 1000) + 600; // ten minutes from now, in seconds

// Decode the URL-safe base64 encoded key
var decoded_key = URLSafeBase64.decode(key);
// Build the URL to sign
var urlToSign = url
    + (url.indexOf('?') > -1 ? "&" : "?")
    + "Expires=" + expiration
    + "&KeyName=" + key_name;
// Sign the URL using the key and URL-safe base64 encode the signature
var hmac = crypto.createHmac('sha1', decoded_key);
var signature = hmac.update(urlToSign).digest();
var encoded_signature = URLSafeBase64.encode(signature);
// Concatenate the URL and encoded signature
urlToSign += "&Signature=" + encoded_signature;
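Wrapped into a reusable helper, the same logic might look like this (a sketch; the function and parameter names are mine, not from the original answer):
var crypto = require("crypto");
var URLSafeBase64 = require("urlsafe-base64");

// Sign a Cloud CDN URL: append Expires and KeyName, then an HMAC-SHA1 signature.
function signCdnUrl(url, keyName, base64Key, ttlSeconds) {
    var expiration = Math.round(Date.now() / 1000) + ttlSeconds;
    var urlToSign = url
        + (url.indexOf("?") > -1 ? "&" : "?")
        + "Expires=" + expiration
        + "&KeyName=" + keyName;
    var signature = crypto
        .createHmac("sha1", URLSafeBase64.decode(base64Key))
        .update(urlToSign)
        .digest();
    return urlToSign + "&Signature=" + URLSafeBase64.encode(signature);
}

// Hypothetical usage:
// var signed = signCdnUrl("https://cdn.example.com/video.mp4", "my-key", key, 600);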
The Cloud CDN content delivery network works with HTTP(S) Load Balancing to deliver content to your users. Are you using an HTTPS load balancer to deliver content to your users?
You can see the attached document [1] on using Google Cloud CDN with HTTP(S) Load Balancing and inserting content into the cache.
[1] https://cloud.google.com/cdn/docs/overview
[2] https://cloud.google.com/cdn/docs/concepts
What error code are you getting? Can you use the curl command and send the output with the error code for further analysis?
Could you confirm that the configuration you have done meets the cacheability requirements? Not all HTTP responses are cacheable; Google Cloud CDN caches only those responses that satisfy specific conditions [3], so please confirm. Upon confirmation, I will investigate further and advise you accordingly.
[3] Cacheability: https://cloud.google.com/cdn/docs/caching#cacheability
Could you provide the output of the two commands below, which will help me verify whether there is a permission issue on these objects? These commands dump all the current permission settings on the object.
gsutil acl get gs://[full_path_to_file_to_be_cached]
gsutil ls -L gs://[full_path_to_file_to_be_cached]
For more details on permissions, refer to this GCP documentation [4]:
[4] Setting bucket permissions: https://cloud.google.com/storage/docs/cloud-console#_bucketpermission
No, it is not possible to create a CDN signed URL from the API, based on what Google documents. The answer provided by @htafoya seems legit.
However, I spent a couple of hours struggling with why the signed URL was not working, with the CDN endpoint complaining about access denied. Eventually I found that the code using the crypto module doesn't produce the same HMAC-SHA1 hash value as what gcloud compute sign-url computes; I still don't know why.
At the same time, I found this lib (jsSHA) to be pretty cool: it generates an HMAC-SHA1 hash value exactly the same as gcloud, and it has a neat API. So I thought I should comment here so that others with the same struggle can benefit. This is the final code I used to sign a gcloud CDN URL:
import jsSHA from 'jssha';

// Assumed helper (the original snippet referenced an undefined safeSign()):
// convert standard base64 to the URL-safe variant Cloud CDN expects.
const safeSign = (b64) => b64.replace(/\+/g, '-').replace(/\//g, '_');

const signCdnUrl = (url, keyName, signKey, daySeconds) => {
    const expire = Math.round(new Date().getTime() / 1000) + daySeconds;
    const extendedUrl = `${url}${url.indexOf('?') > -1 ? "&" : "?"}Expires=${expire}&KeyName=${keyName}`;
    // use jssha to compute the HMAC-SHA1 with the base64-encoded signing key
    const shaObj = new jsSHA("SHA-1", "TEXT", { hmacKey: { value: signKey, format: "B64" } });
    shaObj.update(extendedUrl);
    const signature = safeSign(shaObj.getHMAC('B64'));
    return `${extendedUrl}&Signature=${signature}`;
};
working great!

Using Firebase parameters with Google Cloud Storage under node.js

There is no node.js Firebase Storage client at the moment (too bad...), so I'm turning to gcloud-node with the parameters found in Firebase's console.
I'm trying:
var firebase = require('firebase');
var gcloud = require('gcloud')({
    keyFilename: process.env.FB_JSON_PATH,
    projectId: process.env.FB_PROJECT_ID
});
firebase.initializeApp({
    serviceAccount: process.env.FB_JSON_PATH,
    databaseURL: process.env.FB_DATABASE_URL
});
var fb = firebase.database().ref();
var gcs = gcloud.storage();
var bucket = gcs.bucket(process.env.FB_PROJECT_ID);
bucket.exists(function(err, exists) {
    console.log('err', err);
    console.log('exists', exists);
});
Where:
FB_JSON_PATH is the path to the JSON file generated in order to use the Firebase Server SDK
FB_DATABASE_URL is something like https://app-a36e5.firebaseio.com/
FB_PROJECT_ID is the name of the Firebase project in Google's console: "app-a36e5"
The ID of the bucket is FB_PROJECT_ID (in Firebase's console the Storage tab displays gs://app-a36e5.appspot.com)
When I run this code I get:
err null
exists false
But no other errors.
I'm expecting exists true at least.
Some additional info: I can query the database (so I imagine the JSON file is correct), and I have set the storage rules as follows:
service firebase.storage {
    match /b/app-a36e5.appspot.com/o {
        match /{allPaths=**} {
            allow read: if true;
            allow write: if request.auth != null;
        }
    }
}
So everything in storage should be readable.
Any ideas how to get this to work? Thank you.
The issue here is that you aren't naming your storage bucket correctly. The bucket initialization should be:
var bucket = gcs.bucket('app-a36e5.appspot.com'); // full name of the bucket includes the .appspot.com
I would assume that process.env.FB_PROJECT_ID is just the project ID ("app-a36e5"), and you'd need the full bucket name, not just the project ID (though the bucket name may be process.env.FB_PROJECT_ID + '.appspot.com').
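In other words, something like this should then report the bucket as existing (a sketch, assuming the default Firebase bucket naming):
var bucket = gcs.bucket(process.env.FB_PROJECT_ID + '.appspot.com'); // e.g. app-a36e5.appspot.com
bucket.exists(function(err, exists) {
    console.log('exists', exists); // should now log true
});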
Also, sorry about not providing Storage integrated with Firebase--GCS has a high quality library that you've already found (gcloud-node), and we figured that this provides the best story for developers (Firebase for mobile, Google Cloud Platform for server side development), and didn't want to muddy the waters further.

Bluemix - object storage - node.js - pkgcloud - openstack returns 401

I am trying to use pkgcloud (node.js) OpenStack with Bluemix Object Storage, but when I supply all the requested parameters as shown on the official page, it always returns 401. I tried using Postman as described on Bluemix and it works.
I created a package which is able to authorize correctly. It is just a copy of pkgcloud, with a few fixes.
EDIT: IT IS WORKING! V2 support was shut down by Bluemix and it has only V3 support now, but I have once again fixed the issues.
Remember to use the newest version (2.0.0).
So this is how you can use it now:
var pkgcloud = require('pkgcloud-bluemix-objectstorage');

// Create a config object
var config = {};
// Specify Openstack as the provider
config.provider = "openstack";
// Authentication url
config.authUrl = 'https://identity.open.softlayer.com/';
config.region = 'dallas';
// Use the service catalog
config.useServiceCatalog = true;
// true for applications running inside Bluemix, otherwise false
config.useInternal = false;
// projectId as provided in your Service Credentials
config.tenantId = 'xxx';
// userId as provided in your Service Credentials
config.userId = 'xxx';
// username as provided in your Service Credentials
config.username = 'xxx';
// password as provided in your Service Credentials
config.password = 'xxx';
// This is the part which is NOT in the original pkgcloud. This is how it works
// with the newest version of Bluemix and pkgcloud as of 22.12.2015.
// In reality, anything you put in config.auth will be sent in the body to the
// server, so if you need to change anything to make it work, you can.
// PS: Yes, these are the same credentials as you put into config above.
// I do not fill this in automatically, to keep it transparent.
config.auth = {
    // force the uri to v3; usually you take the base url for authentication
    // and append /v3/auth/tokens to it (at least in Bluemix)
    forceUri: "https://identity.open.softlayer.com/v3/auth/tokens",
    // use public for apps outside Bluemix and internal for apps inside Bluemix.
    // There is also an admin interface; I personally do not know what it is for.
    interfaceName: "public",
    "identity": {
        "methods": [
            "password"
        ],
        "password": {
            "user": {
                "id": "***",      // userId
                "password": "***" // userPassword
            }
        }
    },
    "scope": {
        "project": {
            "id": "***" // projectId
        }
    }
};
console.log("config: " + JSON.stringify(config));
// Create a pkgcloud storage client
var storageClient = pkgcloud.storage.createClient(config);
// Authenticate to OpenStack
storageClient.auth(function (error) {
    if (error) {
        console.error("storageClient.auth() : error creating storage client: ", error);
    }
    else {
        // Print the identity object which contains your Keystone token.
        console.log("storageClient.auth() : created storage client: " + JSON.stringify(storageClient._identity));
    }
});
PS: You should be able to connect to this service from outside of Bluemix, so you can test it on your localhost.
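Once authenticated, the client behaves like a regular pkgcloud storage client. For example, listing containers (a minimal sketch using pkgcloud's standard getContainers API):
// List the containers in the object storage (run after a successful auth()).
storageClient.getContainers(function (err, containers) {
    if (err) {
        console.error("getContainers() error: ", err);
    } else {
        containers.forEach(function (container) {
            console.log("container: " + container.name);
        });
    }
});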
The lines below are old content for version 1.2.3; read on only if you want to use the v2 version of pkgcloud, which worked with Bluemix before January 2016.
EDIT: It looks like Bluemix shut down support for v2 OpenStack and only supports v3, which is not supported by pkgcloud at all. So this does not work anymore (at least for me).
The problem is actually between pkgcloud and the Bluemix authorization process. Bluemix expects slightly different authorization. I created a package which is able to authorize correctly. It is just a copy of pkgcloud, with a few fixes.
And this is how you can use it:
var pkgcloud = require('pkgcloud-bluemix-objectstorage');

// Create a config object
var config = {};
// Specify Openstack as the provider
config.provider = "openstack";
// Authentication url
config.authUrl = 'https://identity.open.softlayer.com/';
config.region = 'dallas';
// Use the service catalog
config.useServiceCatalog = true;
// true for applications running inside Bluemix, otherwise false
config.useInternal = false;
// projectId as provided in your Service Credentials
config.tenantId = 'xxx';
// userId as provided in your Service Credentials
config.userId = 'xxx';
// username as provided in your Service Credentials
config.username = 'xxx';
// password as provided in your Service Credentials
config.password = 'xxx';
// This is the part which is NOT in the original pkgcloud. This is how it worked
// with Bluemix and pkgcloud as of 22.12.2015.
// In reality, anything you put in config.auth will be sent in the body to the
// server, so if you need to change anything to make it work, you can.
// PS: Yes, these are the same credentials as you put into config above.
// I do not fill this in automatically, to keep it transparent.
config.auth = {
    tenantId: "xxx", // projectId
    passwordCredentials: {
        userId: "xxx", // userId
        password: "xxx" // password
    }
};
console.log("config: " + JSON.stringify(config));
// Create a pkgcloud storage client
var storageClient = pkgcloud.storage.createClient(config);
// Authenticate to OpenStack
storageClient.auth(function (error) {
    if (error) {
        console.error("storageClient.auth() : error creating storage client: ", error);
    }
    else {
        // Print the identity object which contains your Keystone token.
        console.log("storageClient.auth() : created storage client: " + JSON.stringify(storageClient._identity));
    }
});

Create and Update Named Caches in Azure Managed Cache using Management API

I am attempting to create an Azure Managed Cache using PowerShell and the Azure Management API. This two-pronged approach is required because the official Azure PowerShell cmdlets have only very limited support for creating and updating an Azure Managed Cache. There is, however, an established pattern for calling the Azure Management API from PowerShell.
My attempts at finding the correct API to call have been somewhat hampered by the limited documentation on the Azure Managed Cache API. However, after working my way through the cmdlets using both the source code and the -Debug option in PowerShell, I have been able to find what appear to be the correct API endpoints, and I have developed some code to access them.
However, I have become stuck after the PUT request is accepted by the Azure API, as subsequent calls to the Management API /operations endpoint show that the result of the operation was an Internal Server Error.
I have been using Joseph Albahari's LINQPad to explore the API, as it allows me to rapidly iterate on a solution using the minimum possible code. To execute the following code snippets you will need both LINQPad and the following extension in your My Extensions script:
public static X509Certificate2 GetCertificate(this StoreLocation storeLocation, string thumbprint) {
    var certificateStore = new X509Store(StoreName.My, storeLocation);
    certificateStore.Open(OpenFlags.ReadOnly);
    var certificates = certificateStore.Certificates.Find(X509FindType.FindByThumbprint, thumbprint, false);
    return certificates[0];
}
The complete source code, including the includes, is available below:
My Extensions - you can replace "My Extensions" by right-clicking My Extensions in the bottom left-hand pane, choosing "Open Script Location in Windows Explorer", and then replacing the highlighted file with this one. Alternatively you may wish to merge my extensions into your own.
Azure Managed Cache Script - you should simply be able to download and double-click this; once it is open and the above extensions and certificates are in place, you will be able to execute the script.
The following settings are used throughout the script; the following variables will need to be changed by anyone who is following along using their own Azure subscription ID and management certificate:
var cacheName = "amc551aee";
var subscriptionId = "{{YOUR_SUBSCRIPTION_ID}}";
var certThumbprint = "{{YOUR_MANAGEMENT_CERTIFICATE_THUMBPRINT}}";
var endpoint = "management.core.windows.net";
var putPayloadXml = @"{{PATH_TO_PUT_PAYLOAD}}\cloudService.xml";
First I have done some setup on the HttpClient:
var handler = new WebRequestHandler();
handler.ClientCertificateOptions = ClientCertificateOption.Manual;
handler.ClientCertificates.Add(StoreLocation.CurrentUser.GetCertificate(certThumbprint));
var client = new HttpClient(handler);
client.DefaultRequestHeaders.Add("x-ms-version", "2012-08-01");
This configures HttpClient to use both a client certificate and the x-ms-version header. The first call to the API fetches the existing CloudService that contains the Azure Managed Cache. Please note this is using an otherwise empty Azure subscription.
var getResult = client.GetAsync("https://" + endpoint + "/" + subscriptionId + "/CloudServices");
getResult.Result.Dump("GET " + getResult.Result.RequestMessage.RequestUri);
This request is successful as it returns StatusCode: 200, ReasonPhrase: 'OK'. I then parse some key information out of the response: the CloudService name, the cache name and the cache ETag:
var cacheDataReader = new XmlTextReader(getResult.Result.Content.ReadAsStreamAsync().Result);
var cacheData = XDocument.Load(cacheDataReader);
var ns = cacheData.Root.GetDefaultNamespace();
var nsManager = new XmlNamespaceManager(cacheDataReader.NameTable);
nsManager.AddNamespace("wa", "http://schemas.microsoft.com/windowsazure");
var cloudServices = cacheData.Root.Elements(ns + "CloudService");
var serviceName = String.Empty;
var ETag = String.Empty;
foreach (var cloudService in cloudServices) {
    if (cloudService.XPathSelectElements("//wa:CloudService/wa:Resources/wa:Resource/wa:Name", nsManager).Select(x => x.Value).Contains(cacheName)) {
        serviceName = cloudService.XPathSelectElement("//wa:CloudService/wa:Name", nsManager).Value;
        ETag = cloudService.XPathSelectElement("//wa:CloudService/wa:Resources/wa:Resource/wa:ETag", nsManager).Value;
    }
}
I have pre-created an XML file that contains the payload of the following PUT request:
<Resource xmlns="http://schemas.microsoft.com/windowsazure">
  <IntrinsicSettings>
    <CacheServiceInput xmlns="">
      <SkuType>Standard</SkuType>
      <Location>North Europe</Location>
      <SkuCount>1</SkuCount>
      <ServiceVersion>1.3.0</ServiceVersion>
      <ObjectSizeInBytes>1024</ObjectSizeInBytes>
      <NamedCaches>
        <NamedCache>
          <CacheName>default</CacheName>
          <NotificationsEnabled>false</NotificationsEnabled>
          <HighAvailabilityEnabled>false</HighAvailabilityEnabled>
          <EvictionPolicy>LeastRecentlyUsed</EvictionPolicy>
        </NamedCache>
        <NamedCache>
          <CacheName>richard</CacheName>
          <NotificationsEnabled>true</NotificationsEnabled>
          <HighAvailabilityEnabled>true</HighAvailabilityEnabled>
          <EvictionPolicy>LeastRecentlyUsed</EvictionPolicy>
        </NamedCache>
      </NamedCaches>
    </CacheServiceInput>
  </IntrinsicSettings>
</Resource>
I construct an HttpRequestMessage with the above payload and a URL comprising the CloudService and cache names:
var resourceUrl = "https://" + endpoint + "/" + subscriptionId + "/cloudservices/" + serviceName + "/resources/cacheservice/Caching/" + cacheName;
var data = File.ReadAllText(putPayloadXml);
XDocument.Parse(data).Dump("Payload");
var message = new HttpRequestMessage(HttpMethod.Put, resourceUrl);
message.Headers.TryAddWithoutValidation("If-Match", ETag);
message.Content = new StringContent(data, Encoding.UTF8, "application/xml");
var putResult = client.SendAsync(message);
putResult.Result.Dump("PUT " + putResult.Result.RequestMessage.RequestUri);
putResult.Result.Content.ReadAsStringAsync().Result.Dump("Content " + putResult.Result.RequestMessage.RequestUri);
This request is nominally accepted by the Azure Service Management API as it returns a StatusCode: 202, ReasonPhrase: 'Accepted' response; this essentially means that the payload has been accepted and will be processed offline. The Operation ID can be parsed out of the HTTP headers to retrieve further information:
var requestId = putResult.Result.Headers.GetValues("x-ms-request-id").FirstOrDefault();
This requestId can be used to request an update on the status of the operation:
var operation = client.GetAsync("https://" + endpoint + "/" + subscriptionId + "/operations/" + requestId);
operation.Result.Dump(requestId);
XDocument.Load(operation.Result.Content.ReadAsStreamAsync().Result).Dump("Operation " + requestId);
The request to the /operations endpoint results in the following payload:
<Operation xmlns="http://schemas.microsoft.com/windowsazure" xmlns:i="http://www.w3.org/2001/XMLSchema-instance">
  <ID>5364614d-4d82-0f14-be41-175b3b85b480</ID>
  <Status>Failed</Status>
  <HttpStatusCode>500</HttpStatusCode>
  <Error>
    <Code>InternalError</Code>
    <Message>The server encountered an internal error. Please retry the request.</Message>
  </Error>
</Operation>
And this is where I am stuck. Chances are I am subtly malforming the request in such a way that the underlying operation is throwing a 500 Internal Server Error, but without a more detailed error message or API documentation I don't think there is anywhere I can go with this.
We worked with Richard offline and the following XML payload got him unblocked.
Note - When adding/removing a named cache on an existing cache, the object size is fixed.
Note 2 - The Azure Managed Cache API is sensitive to whitespace between the IntrinsicSettings element and the CacheServiceInput element (note that they share a line in the payload below).
Also please note, we are working on adding named cache capability to PowerShell itself, so folks don't have to use the APIs to do so.
<Resource xmlns="http://schemas.microsoft.com/windowsazure" xmlns:i="http://www.w3.org/2001/XMLSchema-instance">
  <IntrinsicSettings><CacheServiceInput xmlns="" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema">
    <SkuType>Standard</SkuType>
    <Location>North Europe</Location>
    <SkuCount>1</SkuCount>
    <ServiceVersion>1.3.0</ServiceVersion>
    <ObjectSizeInBytes>1024</ObjectSizeInBytes>
    <NamedCaches>
      <NamedCache>
        <CacheName>default</CacheName>
        <NotificationsEnabled>false</NotificationsEnabled>
        <HighAvailabilityEnabled>false</HighAvailabilityEnabled>
        <EvictionPolicy>LeastRecentlyUsed</EvictionPolicy>
        <ExpirationSettings>
          <TimeToLiveInMinutes>10</TimeToLiveInMinutes>
          <Type>Absolute</Type>
        </ExpirationSettings>
      </NamedCache>
      <NamedCache>
        <CacheName>richard</CacheName>
        <NotificationsEnabled>false</NotificationsEnabled>
        <HighAvailabilityEnabled>false</HighAvailabilityEnabled>
        <EvictionPolicy>LeastRecentlyUsed</EvictionPolicy>
        <ExpirationSettings>
          <TimeToLiveInMinutes>10</TimeToLiveInMinutes>
          <Type>Absolute</Type>
        </ExpirationSettings>
      </NamedCache>
    </NamedCaches>
  </CacheServiceInput>
  </IntrinsicSettings>
</Resource>
