I am trying to deploy Elasticsearch on Azure. I installed the Elastic template from the Azure Marketplace and can access Kibana at http://ipaddress:5601 with the user ID and password given at creation time.
I can also access Elasticsearch at http://ipaddress:9200/ and get the configuration below:
{
  "name" : "myesclient-1",
  "cluster_name" : "myes",
  "cluster_uuid" : "........",
  "version" : {
    "number" : "6.2.4",
    "build_hash" : "ccec39f",
    "build_date" : "",
    "build_snapshot" : false,
    "lucene_version" : "7.2.1",
    "minimum_wire_compatibility_version" : "5.6.0",
    "minimum_index_compatibility_version" : "5.0.0"
  },
  "tagline" : "You Know, for Search"
}
Now I am stuck on the following:
On which VM does Logstash run?
How do I start Logstash?
Where do I store the config files (including the JDBC config), and how do I run the BAT file periodically? The BAT file syntax for a normal VM looks like this:
cd C:\logstash\logstash-6.2.2\bin
logstash -f C:\Users\basudeb\Desktop\config\jdbc.config
pause
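As an aside, if the goal is only to re-run the JDBC query on a schedule, the logstash-input-jdbc plugin has a built-in cron-like `schedule` option, which avoids re-running the BAT file entirely. A minimal sketch (connection string, driver path, credentials, and table name are all placeholders, not from the question):

```conf
input {
  jdbc {
    # Placeholder connection settings - replace with your own.
    jdbc_connection_string => "jdbc:sqlserver://localhost:1433;databaseName=mydb"
    jdbc_user => "user"
    jdbc_driver_library => "C:/drivers/sqljdbc42.jar"
    jdbc_driver_class => "com.microsoft.sqlserver.jdbc.SQLServerDriver"
    statement => "SELECT * FROM mytable"
    # Cron-like syntax: run every 5 minutes.
    schedule => "*/5 * * * *"
  }
}
output {
  elasticsearch {
    hosts => ["http://ipaddress:9200"]
  }
}
```

With this, Logstash stays running and fires the query itself, so no external scheduler is needed.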
The Elastic Azure ARM template does not currently deploy Logstash, only Elasticsearch and Kibana. There's an open issue to track this. If you feel it would be useful, please +1 the issue :)
I am trying to create a pipeline in Azure DevOps for my Nightwatch-Cucumber project. I have everything set up, and when I run the tests locally everything works fine, but when I run them in Azure DevOps I get an error. This is the error from the log.
These are the tasks that I added.
Can anyone help me with this error and how to make it work?
Error connecting to localhost on port 4445
The possible cause of this issue is that port 4445 is not open on the machine where the agent runs.
Based on the error log, it seems that you are using a Microsoft-hosted agent (Ubuntu agent).
You could try the following two methods:
1. You can try to change the connection port to 80. Based on my test, port 80 is open by default.
Here is an example:
nightwatch.json:
"test_settings" : {
"default" : {
"launch_url" : "http://localhost",
"selenium_port" : 80,
"selenium_host" : "hub.testingbot.com",
"silent": true,
"screenshots" : {
"enabled" : false,
"path" : ""
},
"skip_testcases_on_fail": false,
"desiredCapabilities": {
"javascriptEnabled": true,
"acceptSslCerts": true
}
},
2. Since this project works fine on your local machine, the configuration there should be correct, so you could try to create a self-hosted agent.
Then you could run the pipeline on your local machine.
I made it work. I switched to the Ubuntu agent and installed the latest Chrome version and the latest JDK. I also had the wrong chromedriver version installed and changed that in the package.json file. Now it's working fine. Thanks, all, for your answers.
I'm using PM2 to deploy my apps. So far it has worked great; however, now that I have sensitive API credentials I made my repos private, and now I'm unable to deploy via PM2. I have SSH set up and can successfully connect to GitHub via
ssh git@github.com
Here is my ecosystem.json file used for deployment (which works when in public mode):
"deploy" : {
"production" : {
"key" : "../.ssh/id_rsa.pem",
"user" : "root",
"host" : "xxx.xxx.xxx.xx",
"ref" : "origin/master",
"repo" : "git#github.com/AndreasGalster/productnews-graphql.git,
"path" : "/var/www/production",
"post-deploy" : "yarn install && pm2 startOrRestart ecosystem.json --env production"
}
}
Is it not possible to deploy a private repository? If so, how should I do it? I always get "could not read Username for 'https://github.com': No such device or address". Any ideas what I could do?
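One detail worth checking (an observation, not part of the original thread): the SSH form of a GitHub remote uses a colon, not a slash, between the host and the repository path. With a slash, git does not treat the URL as an SSH remote and can fall back to HTTPS, prompting for a username, which matches the error above. The repo line in SSH form would look like:

```json
"repo" : "git@github.com:AndreasGalster/productnews-graphql.git",
```

If the deploy still fails after that, the server's deploy key also needs to be registered with the private repository on GitHub.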
Following this: https://learn.microsoft.com/en-us/azure/service-fabric/service-fabric-set-up-continuous-integration
But when I run the actual deploy, I get the following constructor error, which is not very helpful.
==============================================================================
Task : Service Fabric Application Deployment
Description : Deploy a Service Fabric application to a cluster.
Version : 1.1.2
Author : Microsoft Corporation
Help : [More Information](https://go.microsoft.com/fwlink/?LinkId=820528)
==============================================================================
Searching for path: C:\a\r1\a\LONG_PATH\PublishProfiles\Dev.xml
Found path: C:\a\r1\a\LONG_PATH\PublishProfiles\Dev.xml
Searching for path: C:\a\r1\a\**\drop\applicationpackage
Found path: C:\a\r1\a\PATH\drop\applicationpackage
AAD Authority:
Cluster Application ID:
Client Application ID:
##[error]Exception calling ".ctor" with "1" argument(s): "Value cannot be null. Parameter name: authority"
##[section]Finishing: Deploy Service Fabric Application
##[section]Finishing: Release
Any ideas? Where can I look for a better error message? If I deploy from my desktop with VS2015, it works fine. Thank you.
In reply to Matt Thalman:
ConnectionEndpoint : {myapp.eastus.cloudapp.azure.com:19000}
FabricClientSettings : {
ClientFriendlyName : PowerShell-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
PartitionLocationCacheLimit : 100000
PartitionLocationCacheBucketCount : 1024
ServiceChangePollInterval : 00:02:00
ConnectionInitializationTimeout : 00:00:02
KeepAliveInterval : 00:00:20
HealthOperationTimeout : 00:02:00
HealthReportSendInterval : 00:00:00
HealthReportRetrySendInterval : 00:00:30
NotificationGatewayConnectionTimeout : 00:00:30
NotificationCacheUpdateTimeout : 00:00:30
AuthTokenBufferSize : 4096
}
There is feedback on this issue that you can track; based on that feedback, the workaround is to use certificate authentication.
On the other hand, you can check the source code of that task from here.
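For reference, a certificate-based connection from PowerShell looks roughly like this. This is a sketch of the standard `Connect-ServiceFabricCluster` pattern; the endpoint is taken from the question's output above, and the thumbprint is a placeholder:

```powershell
# Connect to the cluster with X.509 certificate authentication instead of AAD.
# The thumbprint below is a placeholder - use your cluster certificate's thumbprint.
Connect-ServiceFabricCluster -ConnectionEndpoint "myapp.eastus.cloudapp.azure.com:19000" `
    -X509Credential `
    -ServerCertThumbprint "0123456789ABCDEF0123456789ABCDEF01234567" `
    -FindType FindByThumbprint `
    -FindValue "0123456789ABCDEF0123456789ABCDEF01234567" `
    -StoreLocation CurrentUser `
    -StoreName My
```

The release task's cluster connection would be configured with the same certificate details rather than the AAD fields that came through empty in the log.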
I got this issue when I tried to execute PowerShell without quotes ("").
During the first deploy, Visual Studio generates a PowerShell script that it uses for the deployment. Try executing this script manually, and you will see which arguments are passed to parameters without quotes.
I have a weird problem with Filebeat.
I am using CloudFormation to run my stack, and as part of that I install and run Filebeat for log aggregation.
I inject /etc/filebeat/filebeat.yml into the machine and then need to restart Filebeat.
The problem is that Filebeat hangs and the entire provisioning gets stuck (note that if I SSH into the machine and issue "sudo service filebeat restart" myself, the provisioning becomes unstuck and continues). I tried restarting it both via the services section and the commands section of CloudFormation::Init, and both hang.
I haven't tried it via the user data, but that's the worst possible solution for this.
Any ideas why?
Snippets from the template; both of these hang, as mentioned:
"commands" : {
"01" : {
"command" : "sudo service filebeat restart",
"cwd" : "~",
"ignoreErrors" : "false"
}
}
"services" : {
"sysvinit" : {
"filebeat" : {
"enabled" : "true",
"ensureRunning" : "true",
"files" : ["/etc/filebeat/filebeat.yml"]
}
}
}
Well, this does sound like some sort of lock. According to the docs, you should add a dependency on the file to the filebeat service, under the services section, and that will trigger the Filebeat restart you need.
The services section supports a files attribute:
A list of files. If cfn-init changes one directly via the files block, this service will be restarted.
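Note the wording: the restart fires when cfn-init itself changes the file through a files block, so filebeat.yml should be written by cfn-init rather than injected some other way. A minimal sketch of the two sections together (the source URL is a placeholder):

```json
"files" : {
  "/etc/filebeat/filebeat.yml" : {
    "source" : "https://example.com/filebeat.yml",
    "mode"   : "000644",
    "owner"  : "root",
    "group"  : "root"
  }
},
"services" : {
  "sysvinit" : {
    "filebeat" : {
      "enabled" : "true",
      "ensureRunning" : "true",
      "files" : ["/etc/filebeat/filebeat.yml"]
    }
  }
}
```

This way cfn-init both writes the config and handles the restart, so no separate restart command is needed.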
I'm developing a large-scale system (MEAN stack + Elasticsearch + RabbitMQ).
There are many different Node.js projects and queues working together.
I have a few questions.
1. When I want to run and test the whole system, I have to open a lot of terminal windows to run each project. How do I run them all at once, with easy monitoring?
2. When I want to run the same project on multiple machines, how can I easily configure all of them? Sometimes it takes too much time to move around and configure them one by one.
3. How do I configure, run, monitor, and manage the whole system easily? For example, I want to know how many machines are running a project. Or sometimes I want to change a message queue name or IP address everywhere at once; I don't want to go to every machine in both projects to change them one by one.
Sorry for my bad grammar; feel free to edit.
Thanks in advance.
Have a look at PM2.
I'm using it for development and in production.
With this tool you can define a simple JSON file that describes your environment.
pm2_services.json
[{
  "name" : "WORKER",
  "script" : "worker.js",
  "instances" : "3",
  "port" : 3002,
  "node-args" : "A_CONFIG_KEY"
}, {
  "name" : "BACKEND",
  "script" : "backend.js",
  "instances" : "3",
  "port" : 3000,
  "node-args" : "A_CONFIG_KEY"
}, {
  "name" : "FRONTEND",
  "script" : "frontend.js",
  "instances" : "3",
  "port" : 3001,
  "node-args" : "A_CONFIG_KEY"
}]
Then run pm2 start pm2_services.json.
Relevant commands:
pm2 logs shows the logs of all services
pm2 ls shows the running processes
pm2 monit shows the current CPU and memory state
pm2 start FRONTEND starts a service
pm2 stop FRONTEND stops a service
NOTE:
Be careful with the watch feature of PM2.
In my case, my CPU jumped to a permanent 100%.
To watch many files for changes, I use node-dev.
And here's the solution to use it with PM2:
[{
  "name" : "WORKER",
  "script" : "worker.js",
  "instances" : 1,
  "watch" : false,
  "exec_interpreter" : "node-dev",
  "exec_mode" : "fork_mode"
}]
You could write a Node project which launches all the other ones with appropriate arguments using child_process.
You could consider a tool like Puppet or Chef.
Same as #2.