I am playing around with Vagrant to set up some droplets and deploy my Node.js server with Ansible (I am using DigitalOcean). In some parts of my JS code I need to set the current IP of the droplet. The problem is that I can't set the IP manually, since the droplet gets a random IP from DigitalOcean via Vagrant. How can I "get" this IP and make use of it in my Ansible script? Sure, I could just run a wget http://ipinfo.io/ip -qO - on the host itself or check it with ip, but I guess it should also be possible to get this info from Vagrant?
How can I "get" this IP and make use of it in my Ansible script?
Use ipify_facts:
- name: Get my public IP
  ipify_facts:

- debug: var=ipify_public_ip
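A slightly fuller sketch of that approach, assuming you want the discovered IP rendered into your Node.js config (the host group, template name, and destination path below are illustrative, not from the question):

```yaml
- hosts: droplets
  tasks:
    # Sets the ipify_public_ip fact by querying api.ipify.org
    - name: Get my public IP
      ipify_facts:

    # config.js.j2 is a hypothetical Jinja2 template that references
    # {{ ipify_public_ip }} wherever the script needs the address
    - name: Render the IP into the app config
      template:
        src: config.js.j2
        dest: /opt/app/config.js
```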
From the documentation (specifically: http://docs.ansible.com/ansible/playbooks_variables.html#information-discovered-from-systems-facts), Ansible seems to have a pre-defined variable containing your networking information:
"ansible_eth0": {
    "active": true,
    "device": "eth0",
    "ipv4": {
        "address": "REDACTED",
        "netmask": "255.255.255.0",
        "network": "REDACTED"
    },
    "ipv6": [
        {
            "address": "REDACTED",
            "prefix": "64",
            "scope": "link"
        }
    ],
    "macaddress": "REDACTED",
    "module": "e1000",
    "mtu": 1500,
    "type": "ether"
},
I can't remember which is the public interface on DigitalOcean, but you should be able to use {{ ansible_eth0.ipv4.address }} in your playbook.
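One way to get that value into the JS code is to render it at deploy time with Ansible's template module; config.js.j2 below is a hypothetical template, not something from the question:

```
// config.js.j2 — a hypothetical Jinja2 template rendered with Ansible's
// template module; the placeholder is filled in with the fact at deploy time
module.exports = {
  publicIp: "{{ ansible_eth0.ipv4.address }}"
};
```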
Side note, you can use this command to list all "discovered" variables:
ansible hostname -m setup
We are developing locally using SAM Local to invoke a Lambda in an API Gateway. SAM Local does this using a Docker container (set up as close to the Lambda Node runtime as possible). We want this Lambda to access some data in an API mocking service, in the shape of some Node Express servers running in another container (this could also just be run locally if needed). Both containers are in a user-created Docker bridge network, created as follows:
docker network create sam-demo
The API mocking service is run and added to the bridge network:
docker run --network sam-demo --name mock -d -P mock:latest
The Lambda is invoked in debug mode and added to the bridge network:
sam local start-api -t template.json -d 9229 --docker-network sam-demo
Inspecting the bridge network reveals both the SAM local lambda (wizardly_knuth) and the mocks are there:
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
    "Driver": "default",
    "Options": {},
    "Config": [
        {
            "Subnet": "172.20.0.0/16",
            "Gateway": "172.20.0.1"
        }
    ]
},
"Internal": false,
"Attachable": false,
"Ingress": false,
"ConfigFrom": {
    "Network": ""
},
"ConfigOnly": false,
"Containers": {
    "5ebfa4295a56e4df840676a2e214891543fd4e8cb271ed70ddd67946ab451119": {
        "Name": "wizardly_knuth",
        "EndpointID": "xxx",
        "MacAddress": "02:42:ac:14:00:03",
        "IPv4Address": "172.20.0.3/16",
        "IPv6Address": ""
    },
    "d735c9aa840e4ce7180444cf168cd6b68451c9ca29ba87b7cb23edff11abea7b": {
        "Name": "mock",
        "EndpointID": "xxx",
        "MacAddress": "02:42:ac:14:00:02",
        "IPv4Address": "172.20.0.2/16",
        "IPv6Address": ""
    }
},
"Options": {},
"Labels": {}
}
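If you need those addresses in a script rather than by eye, they are easy to pull out of the `docker network inspect sam-demo` JSON; a minimal Node.js sketch (container IDs abbreviated, data taken from the output above):

```javascript
// Abbreviated `docker network inspect sam-demo` output from above;
// in practice you would JSON.parse the command's stdout
const network = [{
  Containers: {
    "5ebfa4295a56": { Name: "wizardly_knuth", IPv4Address: "172.20.0.3/16" },
    "d735c9aa840e": { Name: "mock", IPv4Address: "172.20.0.2/16" }
  }
}];

// Map container name -> bridge-network IP (dropping the /16 prefix length)
const ips = Object.fromEntries(
  Object.values(network[0].Containers)
    .map(c => [c.Name, c.IPv4Address.split("/")[0]])
);

console.log(ips.mock); // → 172.20.0.2
```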
Now, what should the URL be for the Lambda to hit the mock? According to the Docker docs it should be the IPv4Address of the mock container, i.e. http://172.20.0.2, but I am not sure which port to use or how to find it.
I can exec into the mock container and ping the SAM Local container successfully, BUT I can't do the same from the SAM Local container, as its shell doesn't have ping, curl, nc or anything else installed.
I can't hit the Mock container directly from my machine as it is a Mac and I believe there are issues with doing so.
Any advice or next steps are greatly appreciated.
Much thanks,
Sam
UPDATE
In the end I gave up on this approach as I could not figure out what the URL for the Lambda should be to hit the mock within the Docker Bridge network.
The alternative approach was to just hit the mock Docker container directly from the Lambda using this URL (the mock container exposes port 3002):
http://docker.for.mac.localhost:3002/
Hope this might help somebody out.... please let me know if anyone solves the bridge network issue I originally posted about.
Thanks,
Sam
I am working on an Azure IoT Edge project. Currently I am going through the production readiness checklist. I followed the documentation to use storage on the host filesystem for the edgeAgent and edgeHub modules.
When I run sudo iotedge check, edgeHub is OK but edgeAgent raises a warning:
‼ production readiness: Edge Agent's storage directory is persisted on the host filesystem - Warning
The edgeAgent module is not configured to persist its /tmp/edgeAgent directory on the host filesystem.
Data might be lost if the module is deleted or updated.
Please see https://aka.ms/iotedge-storage-host for best practices.
√ production readiness: Edge Hub's storage directory is persisted on the host filesystem - OK
Here is a snippet from the deployment template:
"systemModules": {
    "edgeAgent": {
        "type": "docker",
        "settings": {
            "image": "mcr.microsoft.com/azureiotedge-agent:1.0",
            "createOptions": {
                "HostConfig": {
                    "Binds": [
                        "/home/pi/iotedge/edgeAgent/storage/:/iotedge/storage/"
                    ]
                }
            }
        },
        "env": {
            "storageFolder": {
                "value": "/iotedge/storage/"
            }
        }
    },
    "edgeHub": {
        "type": "docker",
        "status": "running",
        "restartPolicy": "always",
        "settings": {
            "image": "mcr.microsoft.com/azureiotedge-hub:1.0",
            "createOptions": {
                "HostConfig": {
                    "Binds": [
                        "/home/pi/iotedge/edgeHub/storage:/iotedge/storage/"
                    ],
                    "PortBindings": {
                        "5671/tcp": [
                            {
                                "HostPort": "5671"
                            }
                        ],
                        "8883/tcp": [
                            {
                                "HostPort": "8883"
                            }
                        ],
                        "443/tcp": [
                            {
                                "HostPort": "443"
                            }
                        ]
                    }
                }
            }
        },
        "env": {
            "storageFolder": {
                "value": "/iotedge/storage/"
            }
        }
    }
},
As of release 1.0.9, there's an issue where edgeAgent's configuration doesn't update unless its image tag is updated. Two options from your current state:
Use a specific tag in the image settings (always recommended). E.g. mcr.microsoft.com/azureiotedge-agent:1.0.9
Delete the edgeAgent container on the device: docker rm -f edgeAgent. It will get restarted in under 30 secs and the new storageFolder env var will be picked up.
Run 'iotedge check' again after the container is updated, and this warning should go away.
I have followed the same documentation and was able to avoid the production readiness checklist warnings on my Raspberry Pi 3.
1) I have configured the "Binds" as per the documentation Link module storage to device storage
"Binds":["/etc/iotedge/storage/:/iotedge/storage/"]
2) I have granted the user access to the HostStoragePath from an SSH terminal.
sudo chown 1000 /etc/iotedge/storage/
sudo chmod 700 /etc/iotedge/storage/
3) Restarted the Raspberry Pi 3 to make sure the access changes take effect.
Make sure you have host storage folders available on your edge device.
Make sure to provide the user full access on those folders.
Try the deployment with your updated manifest; it should work.
I installed json-server from https://github.com/typicode/json-server and it works fine; I can execute GET, POST, etc., but only locally. When I try to connect from the outside it doesn't work.
I tried turning off the firewall, changing ports, and different startup settings (e.g. json-server --host 192.168.0.21 db.json), and nothing helped. Here's my database.
[
    {
        "id": 2,
        "login": "admin3",
        "haslo": "haslo3"
    },
    {
        "id": 3,
        "login": "admin2",
        "haslo": "haslo2"
    },
    {
        "login": "admin1",
        "haslo": "haslo1",
        "id": 7
    }
]
I would like to be able to connect to my server from anywhere in the world, but only local addresses work (e.g. http://192.168.0.21:3000/user or http://localhost:3000/user). What's wrong with my approach?
Make sure you are starting the server bound to your adapter (e.g. --host 0.0.0.0).
Then you need to setup port forwarding on your router.
https://deaddesk.top/exposing-local-server-to-the-public-internet/
I have installed Node.js and npm on EC2 through the command line and also uploaded all my files for my StrongLoop project.
When I run the server locally it works fine, but when I run node server.js on the EC2 command line, it runs but only says:
Web server listening at: http://0.0.0.0:3001/
ENVIRONMENT : development
How can I reach my server on AWS EC2? I can't figure it out.
My config.json file:
{
    "restApiRoot": "/api",
    "host": "0.0.0.0",
    "port": 3001,
    "remoting": {
        "context": {
            "enableHttpContext": false
        },
        "rest": {
            "normalizeHttpPath": false,
            "xml": false
        },
        "json": {
            "strict": false,
            "limit": "100kb"
        },
        "urlencoded": {
            "extended": true,
            "limit": "100kb"
        },
        "cors": false,
        "errorHandler": {
            "disableStackTrace": true
        }
    },
    "legacyExplorer": false
}
A few possible things:
Make sure the port you are using is in your AWS Security Group inbound rules. Go to 'Security Groups', select the group associated with your instance, then click "Inbound". In your case, you have to add port 3001 for the HTTP protocol with source 0.0.0.0/0.
You need to keep the service running even after you close the terminal window. To make the server start automatically and run in the background, you can use systemd via 'systemctl' on Ubuntu. Here is a step-by-step guide and tutorial: https://www.digitalocean.com/community/tutorials/how-to-use-systemctl-to-manage-systemd-services-and-units
Now try visiting your IPv4 Public IP or your associated domain name with the port number. e.g. 1.2.3.4:3001 or mywebsite.com:3001. The IP is written in the same row as your instance in AWS.
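The systemctl suggestion above can be sketched as a unit file; the unit name, user, and paths here are assumptions, not from the question:

```ini
# /etc/systemd/system/strongloop-app.service — a minimal, hypothetical unit
[Unit]
Description=StrongLoop Node.js server
After=network.target

[Service]
WorkingDirectory=/home/ubuntu/app
ExecStart=/usr/bin/node server.js
Restart=always
Environment=NODE_ENV=production

[Install]
WantedBy=multi-user.target
```

Enable it with sudo systemctl enable --now strongloop-app; the server then keeps running after the terminal closes and restarts on boot.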
I am new to logstash. I installed elasticsearch, kibana, logstash and logstash-forwarder on Ubuntu from this tutorial, and everything works fine on the local machine.
Now I want to include a log file from another system, so I installed logstash-forwarder on the client machine, but it fails to run. I can't figure out the mistake. On the client machine I didn't install logstash, since logstash is running on the server. If I have misunderstood anything, please let me know. What should the configuration be for server and client access?
Logstash-forwarder config on the client:
{
    "network": {
        "servers": [ "server_ip_addr:5000" ],
        "timeout": 15,
        "ssl ca": "/etc/pki/tls/certs/logstash-forwarder.crt"
    },
    "files": [
        {
            "paths": [
                "/var/log/syslog",
                "/var/log/auth.log"
            ],
            "fields": { "type": "syslog" }
        }
    ]
}
Thanks
Installing logstash-forwarder is enough.
Here are some ways to troubleshoot your problem.
Check if logstash is running on port 5000
Log in to the logstash server and run:
telnet localhost 5000
If you can't telnet, make sure the logstash service is running properly.
Check if there is a firewall issue between the clients and the logstash server.
Log in to the client (where you installed the forwarder) and run:
telnet server_ip_addr 5000
If you can't, you need to open firewall port 5000 between the client and the logstash server.
In the config file, the files part is fine, but could you update the network part as follows and let me know the result:
"network": {
    "servers": [ "server_ip_addr:5000" ],
    "timeout": 15,
    "ssl certificate": "/etc/pki/tls/certs/logstash-forwarder.crt",
    "ssl key": "/etc/pki/tls/certs/logstash-forwarder.key",
    "ssl ca": "/etc/pki/tls/certs/logstash-forwarder.crt"
},