Docker Remote API & Binds - node.js

I'm trying to use Docker's Remote API via the Node.js docker.io library, but I can't find the right syntax for binding directories.
I'm currently using this code:
docker.containers.start(cId, { Binds: ['/tmp:/tmp'] }, function(err, container)...
It starts the container, but when I inspect it, nothing shows up under Volumes.
Docker's Remote API documentation is thin on syntax, so I'm hoping somebody here knows the correct call.

I finally got it working. It turns out you also need to declare the volumes when you create the container. Here's the proper syntax:
The first API call, to POST /containers/create, should include:
{
  "Volumes": { "/container/path": {} }
}
Then, when starting the container (POST /containers/{id}/start), use the "Binds" option:
{
  "Binds": [ "/host/path:/container/path:rw" ]
}
source: https://groups.google.com/d/msg/docker-club/GrFQ3F1rqU4/3ZC5QoNkSAAJ
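For completeness, here is a minimal sketch of the same two steps from Node.js using the docker.io-style calls from the question; the exact method names, the ubuntu image and the callback shapes are assumptions, not verified against the library:

docker.containers.create({
  Image: 'ubuntu',            // placeholder image
  Volumes: { '/tmp': {} }     // declare the container path at create time
}, function(err, container) {
  if (err) return console.error(err);
  docker.containers.start(container.Id, {
    Binds: ['/tmp:/tmp:rw']   // bind the host path to the declared container path
  }, function(err, result) {
    if (err) return console.error(err);
    console.log('started', container.Id);
  });
});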

Related

How do I start a Google Cloud instance with a container image from a Node.JS client?

I want to start a VM instance with a container image from within a Google Cloud Function in Node.js.
I can't figure out how to call the createVM function with a container image specified.
const [vm, operation] = await zone.createVM(vmName, {os: 'ubuntu'});
I don't see it anywhere in the documentation
https://googleapis.dev/nodejs/compute/latest/index.html
When creating the instance in the Google Cloud console, I was able to copy the equivalent REST command, take the JSON and paste it into the Google Cloud Compute Node.js SDK config.
const Compute = require('@google-cloud/compute');
// Creates a client
const compute = new Compute();
// Select the zone in which to create the VM
const zone = compute.zone('us-east1-d');
// Config copied from the console's equivalent REST command; values such as the
// project name and image version will differ when you run it.
const config = {
  "kind": "compute#instance",
  "name": "server",
  "zone": "projects/projectName/zones/us-east1-d",
  "machineType": "projects/projectName/zones/us-east1-d/machineTypes/f1-micro",
  "displayDevice": {
    "enableDisplay": false
  },
  "metadata": {
    "kind": "compute#metadata",
    "items": [
      {
        "key": "gce-container-declaration",
        "value": "spec:\n containers:\n - name: game-server\n image: gcr.io/projectName/imageName\n stdin: false\n tty: false\n restartPolicy: Never\n\n# This container declaration format is not public API and may change without notice. Please\n# use gcloud command-line tool or Google Cloud Console to run Containers on Google Compute Engine."
      },
      {
        "key": "google-logging-enabled",
        "value": "true"
      }
    ]
  },
  "tags": {
    "items": [
      "https-server"
    ]
  },
  "disks": [
    {
      ... // Copied from Google Cloud console -> Compute Engine -> Create VM Instance -> copy equivalent REST command (at the bottom of the page)
    }
  ]
};
// If the callback is omitted, createVM returns a Promise.
zone.createVM('new-vm-name', config).then(function(data) {
  const vm = data[0];
  const operation = data[1];
  const apiResponse = data[2];
  res.status(200).send(apiResponse);
});
What I understand you want to end up with is a new GCP Compute Engine instance running the Container-Optimized OS (COS), which runs Docker and starts a container from an image hosted in a repository, and you want to achieve this programmatically through the Node.js API.
The trick (for me) was to create a Compute Engine instance manually through the GCP Cloud Console. Once that is done, we can log in to the instance and retrieve the raw metadata by running:
wget --output-document=- --header="Metadata-Flavor: Google" --quiet http://metadata.google.internal/computeMetadata/v1/?recursive=true
What we get back is a JSON representation of that metadata. From here it becomes clear that to create the desired Compute Engine instance through the API, we create the instance with the standard API and also define the needed metadata. The Container-Optimized OS appears to have a script/program which reads the metadata and uses it to run Docker.
When I examined the metadata of a Compute Engine instance running a container, I found an attribute called:
attributes.gce-container-declaration
That contained:
"spec:\n containers:\n - name: instance-1\n image: nodered/node-red\n stdin: false\n tty: false\n restartPolicy: Always\n\n# This container declaration format is not public API and may change without notice. Please\n# use gcloud command-line tool or Google Cloud Console to run Containers on Google Compute Engine."
which is YAML; formatted prettily, it reads:
spec:
  containers:
    - name: instance-1
      image: nodered/node-red
      stdin: false
      tty: false
      restartPolicy: Always

# This container declaration format is not public API and may change without notice. Please
# use gcloud command-line tool or Google Cloud Console to run Containers on Google Compute Engine.
And there we have it. To create a GCP Compute Engine instance hosting a container image, we create the instance from a Container-Optimized OS boot image (e.g. "image": "projects/cos-cloud/global/images/cos-stable-77-12371-114-0") and set the metadata to define the container to run.
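A minimal sketch of that last step with the Node.js client follows; the VM name, machine type and container image are placeholders, and the config shape (disks, metadata and machineType passed straight to createVM) is an assumption rather than something taken from the answers above:

const Compute = require('@google-cloud/compute');
const compute = new Compute();
const zone = compute.zone('us-east1-d');

// The container to run, in the same (non-public) declaration format seen in the metadata.
const containerSpec =
  'spec:\n' +
  '  containers:\n' +
  '    - name: instance-1\n' +
  '      image: nodered/node-red\n' +
  '      stdin: false\n' +
  '      tty: false\n' +
  '      restartPolicy: Always\n';

zone.createVM('cos-container-vm', {
  machineType: 'f1-micro',
  disks: [{
    boot: true,
    autoDelete: true,
    initializeParams: {
      // Boot from a Container-Optimized OS image
      sourceImage: 'projects/cos-cloud/global/images/cos-stable-77-12371-114-0'
    }
  }],
  metadata: {
    items: [
      { key: 'gce-container-declaration', value: containerSpec },
      { key: 'google-logging-enabled', value: 'true' }
    ]
  }
}).then(([vm, operation]) => operation.promise())
  .then(() => console.log('VM with container created'));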

Not able to connect/call services of other nodes Moleculer NodeJs

I have created 2 Moleculer nodes using:
npm init project project_name
I have added a users.list action in project one which returns a list of all users; it works fine, and I have also exposed its API.
The issue is that when I run the other node, project2, and call users.list from an action of one of its services, it throws SERVICE_NOT_FOUND. It can call its own actions, but not the actions of the other node.
I want to connect the different nodes so that I can call the services of one node from another. I don't know what I am missing or doing wrong; I followed the Moleculer documentation, which says it should work this way, but it is not working.
I am using Redis as the transporter.
Here is code for action
welcome: {
params: {
name: "string"
},
async handler(ctx) {
var tmp = await ctx.call("users.list",{});
return `Welcome, ${tmp}`;
}
}
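For reference, both projects would normally point at the same transporter in moleculer.config.js; a minimal sketch, assuming a Redis instance on localhost and the default project layout:

// moleculer.config.js (in both projects)
module.exports = {
  nodeID: "project1",                     // must be unique per node, e.g. "project2" in the other project
  transporter: "redis://localhost:6379"   // both nodes must use the same transporter URL
};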

How to run a bash script in an already created/existing VM in GCP using Node.js?

I have gone through the Nodejs-GCP-Compute-Github doc and used the sample code to create a new VM and list existing VMs using Node.js and the npm module.
Now I want to connect to my existing VM and run a small bash script that invokes a few commands, mostly git clone or curl, to run files in the VM.
I couldn't find a method in @google-cloud/compute to connect to the existing VMs and do this.
Do we have any such method?
Is it possible to do this in some other way using Node.js?
Two different methods come to mind:
You could add your public key to the instance and then connect to it via SSH using a Node SSH library (https://cloud.google.com/compute/docs/instances/adding-removing-ssh-keys); see the sketch after the config below.
Set a startup script for the instance when you are creating it. This can be done by setting the second parameter (config) of createVM to something like:
{
  os: 'ubuntu',
  metadata: {
    'startup-script': 'your commands'
  }
}
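For the first method, here is a minimal sketch using the ssh2 npm package; the package choice, host, user, key path and commands are all placeholders and not part of the original answer:

const { Client } = require('ssh2');
const fs = require('fs');

const conn = new Client();
conn.on('ready', () => {
  // Run the small bash script on the existing VM
  conn.exec('git clone https://github.com/example/repo.git && bash repo/setup.sh', (err, stream) => {
    if (err) throw err;
    stream.on('data', data => process.stdout.write(data))
          .on('close', () => conn.end());
  });
}).connect({
  host: 'VM_EXTERNAL_IP',
  username: 'your-user',
  privateKey: fs.readFileSync('/path/to/private-key')
});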

Google Cloud Compute Engine API: createVM directly with setMetadata

I use @google-cloud/compute to create VM instances automatically.
I also use startup scripts in those instances.
So, first I call Zone.createVM and then VM.setMetadata.
But in some regions the startup script does not run; it only runs after a VM reset, so it looks like my VM.setMetadata call is simply too late.
In the web interface we can create a VM directly with metadata, but I do not see this ability in the API.
Can it be done with the API?
To set up a startup script during instance deployment you can provide it as part of the metadata property in the API call:
POST https://www.googleapis.com/compute/v1/projects/myproject/zones/us-central1-a/instances
{
  ...
  "metadata": {
    "items": [
      {
        "key": "startup-script",
        "value": "#! /bin/bash\n\n# Installs apache and a custom homepage\napt-get update\napt-get install -y apache2\ncat <<EOF > /var/www/html/index.html\n<html><body><h1>Hello World</h1>\n<p>This page was created from a simple start up script!</p>\n</body></html>"
      }
    ]
  }
  ...
}
See the full reference for the "compute.instances" resource of the Compute Engine API.
Basically, if you are using a Node.js library to create the instance you are already calling this endpoint, so you will only need to add the metadata keys as documented.
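A minimal sketch of how this might look with the Node.js client; the zone, VM name, OS shorthand and script contents are placeholders, and the config shape assumes createVM passes the metadata through to the instance resource:

const Compute = require('@google-cloud/compute');
const compute = new Compute();
const zone = compute.zone('us-central1-a');

zone.createVM('startup-script-vm', {
  os: 'ubuntu',                      // shorthand expanded by the client library
  metadata: {
    items: [
      {
        key: 'startup-script',
        value: '#! /bin/bash\napt-get update\napt-get install -y apache2'
      }
    ]
  }
}).then(([vm, operation]) => operation.promise())
  .then(() => console.log('VM created with startup script'));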
Also, if you are doing this frequently, it would probably be more practical to store the script in a GCP bucket and simply add its URI to the metadata, like this:
"metadata": {
"items": [
{
"key": "startup-script-url",
"value": "gs://bucket/myfile"
}
]
},

How to configure Solr to only accept requests that send a keypass/access-token

Solr experts,
at the moment I am using a custom proxy script to only accept requests with the right keypass parameter. Is it possible to configure Solr for such a use case, so that I do not need this proxy script?
For example: localhost/proxy/search?keypass=asdaefva&query=SEARCHPARAMETERS
Best regards,
Tim
If you have a recent enough Solr version, you can use Solr's built-in support for authentication and authorization. This also allows you to limit the collections and operations that a given key (i.e. user:pass) can access.
These are configured in a file named security.json, which is either stored in ZooKeeper (for SolrCloud) or locally on disk (support for a local file in standalone mode was added later than the original support for using it in cluster mode).
{
  "authentication": {
    "class": "solr.BasicAuthPlugin",
    "credentials": {
      "solr": "IV0EHq1OnNrj6gvRCwvFwTrZ1+z1oBbnQdiVC3otuq0= Ndd7LKvVBAaZIF0QAVi1ekCfAJXr1GGfLtRUXhgrF8c="
    }
  },
  "authorization": {
    "class": "solr.RuleBasedAuthorizationPlugin",
    "permissions": [
      {
        "name": "security-edit",
        "role": "admin"
      }
    ],
    "user-role": {
      "solr": "admin"
    }
  }
}
When running Solr in standalone mode, you need to create the security.json file and put it in the $SOLR_HOME directory of your installation (this is the same place where solr.xml is located, usually server/solr).
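Once the BasicAuthPlugin is enabled, every request must carry credentials. Here is a minimal sketch of a query from Node.js; the core name is a placeholder, and solr:SolrRocks is the well-known user:password pair from the Solr reference example (which the credentials hash above appears to be):

const http = require('http');

const options = {
  hostname: 'localhost',
  port: 8983,
  path: '/solr/mycore/select?q=*:*',
  headers: {
    // HTTP Basic auth header built from user:password
    Authorization: 'Basic ' + Buffer.from('solr:SolrRocks').toString('base64')
  }
};

http.get(options, res => {
  let body = '';
  res.on('data', chunk => (body += chunk));
  res.on('end', () => console.log(body));
});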
