I'm using the com.microsoft.azure.azurefunctions.gradle.plugin in version 1.11.0 in my project and can run it successfully locally when running "gradlew clean azureFunctionsRun".
My issue is that the azureFunctionsPackage step takes up to 7 min 41 sec ("Step 8 of 8: Installing function extensions if needed" is where it spends most of the time).
When debugging, I see that I'm getting a "Read timed out" for https://rt.services.visualstudio.com:443
2022-10-06T11:27:17.709+0200 [DEBUG] [com.microsoft.applicationinsights.core.dependencies.http.impl.conn.PoolingHttpClientConnectionManager] Connection request: [route: {s}->https://rt.services.visualstudio.com:443][total kept alive: 3; route allocated: 1 of 20; total allocated: 3 of 200]
2022-10-06T11:27:17.711+0200 [DEBUG] [com.microsoft.applicationinsights.core.dependencies.http.wire] http-outgoing-3 << "[read] I/O error: Read timed out"
2022-10-06T11:27:17.711+0200 [DEBUG] [com.microsoft.applicationinsights.core.dependencies.http.impl.conn.PoolingHttpClientConnectionManager] Connection leased: [id: 3][route: {s}->https://rt.services.visualstudio.com:443][total kept alive: 2; route allocated: 1 of 20; total allocated: 3 of 200]
And also for https://dc.services.visualstudio.com:443:
2022-10-06T11:31:28.280+0200 [DEBUG] [com.microsoft.applicationinsights.core.dependencies.http.impl.conn.PoolingHttpClientConnectionManager] Connection request: [route: {s}->https://dc.services.visualstudio.com:443][total kept alive: 3; route allocated: 2 of 20; total allocated: 3 of 200]
2022-10-06T11:31:28.281+0200 [DEBUG] [com.microsoft.applicationinsights.core.dependencies.http.wire] http-outgoing-2 << "[read] I/O error: Read timed out"
2022-10-06T11:31:28.281+0200 [DEBUG] [com.microsoft.applicationinsights.core.dependencies.http.impl.conn.PoolingHttpClientConnectionManager] Connection leased: [id: 2][route: {s}->https://dc.services.visualstudio.com:443][total kept alive: 2; route allocated: 2 of 20; total allocated: 3 of 200]
But the execution always finishes after around 7 min 41 sec with:
Failed to check update for Azure Functions Core Tools
Failed to check update for Azure Functions Core Tools
Function extension installation done.
Successfully built Azure Functions.
Is there any way to block the "check update for Azure Functions Core Tools"?
Or any other way to resolve the "Read timed out" issue?
Used:
Java 17
Gradle 7.5.1
Ubuntu 22.04
Azure Functions Plugin 1.11.0
I've added the following entry to my host.json:
"extensionBundle": {
"id": "Microsoft.Azure.Functions.ExtensionBundle",
"version": "[3.3.0, 4.0.0)"
}
This doesn't really "solve" the problem, but it skips the "install extensions" step, so the build no longer gets stuck. This is good enough for me.
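For context, the entry sits at the top level of host.json next to the existing "version" property. A minimal file would look roughly like this (keep whatever other settings you already have):
{
  "version": "2.0",
  "extensionBundle": {
    "id": "Microsoft.Azure.Functions.ExtensionBundle",
    "version": "[3.3.0, 4.0.0)"
  }
}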
I am unable to start a 3 node universe with yb-master. I am following the docs here:
https://docs.yugabyte.com/latest/deploy/manual-deployment/start-masters/#verify-health
I created 3 master.conf files for 3 separate ips.
For 10.0.0.185:
--master_addresses=10.0.0.185:7100,10.0.0.141:7100,10.0.0.119:7100
--rpc_bind_addresses=10.0.0.185:7100
--fs_data_dirs=/home/mark/yuga/y1
For 10.0.0.141:
--master_addresses=10.0.0.141:7100,10.0.0.185:7100,10.0.0.119:7100
--rpc_bind_addresses=10.0.0.141:7100
--fs_data_dirs=/home/mark/yuga/y1
For 10.0.0.119:
--master_addresses=10.0.0.119:7100,10.0.0.141:7100,10.0.0.185:7100
--rpc_bind_addresses=10.0.0.119:7100
--fs_data_dirs=/home/mark/yuga/y1
I started each node up with the command ./bin/yb-master --flagfile master.conf >& ./y1/yb-master.out &
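(As a sanity check against the verify-health step in the linked docs, the registered masters can be listed with yb-admin; a sketch using the addresses from the flagfiles above:)
./bin/yb-admin --master_addresses 10.0.0.185:7100,10.0.0.141:7100,10.0.0.119:7100 list_all_masters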
What seems to happen is that the first 2 nodes start up fine, but as soon as I try to spin up the third node, the first node crashes and I end up with an error.
At first I thought maybe this had to do with the servers I've got, so I changed the order in which I spin up the yb-masters, but it's always the one I start first that dies.
Looking at yb-master.INFO for each IP (y1/yb-data/master/logs/yb-master.INFO) with the command cat y1/yb-data/master/logs/yb-master.INFO | grep master I see:
The one that crashes:
This master's current role is: FOLLOWER
And the other two show:
I0110 00:02:56.565732 3292 client-internal.cc:2384] New master addresses: [10.0.0.141:7100,10.0.0.185:7100,10.0.0.119:7100, 10.0.0.141:7100, 10.0.0.185:7100, 10.0.0.119:7100]
E0110 00:02:58.069311 3162 async_initializer.cc:99] Failed to initialize client: Timed out (yb/rpc/rpc.cc:224): Could not locate the leader master: GetLeaderMasterRpc(addrs: [10.0.0.141:7100, 10.0.0.185:7100, 10.0.0.119:7100, 10.0.0.141:7100, 10.0.0.185:7100, 10.0.0.119:7100], num_attempts: 46) passed its deadline 1101.945s (passed: 1.504s): Network error (yb/util/net/socket.cc:551): recvmsg error: Connection refused (system error 111)
I0110 00:02:59.071501 3293 client-internal.cc:2355] Reinitialize master addresses from file: master.conf
I0110 00:02:59.071782 3293 client-internal.cc:2384] New master addresses: [10.0.0.141:7100,10.0.0.185:7100,10.0.0.119:7100, 10.0.0.141:7100, 10.0.0.185:7100, 10.0.0.119:7100]
and
I0110 00:02:57.610631 2128 master_service.cc:531] Patching role from leader to follower because of: Leader not ready to serve requests (yb/master/scoped_leader_shared_lock.cc:123): Leader not yet ready to serve requests: leader_ready_term_ = -1; cstate.current_term = 1 [suppressed 77 similar messages]
I0110 00:02:58.072002 2144 client-internal.cc:2355] Reinitialize master addresses from file: master.conf
I0110 00:02:58.072276 2144 client-internal.cc:2384] New master addresses: [10.0.0.119:7100,10.0.0.141:7100,10.0.0.185:7100, 10.0.0.119:7100, 10.0.0.141:7100, 10.0.0.185:7100]
I'm not sure why I'm seeing these errors. Am I missing something while attempting to start up the 3 yb-masters?
I should also mention that I've ensured all 3 nodes have the correct system configurations, as mentioned here: https://docs.yugabyte.com/latest/deploy/manual-deployment/system-config/#setting-system-wide-ulimits
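(For reference, a rough sketch of the checks that go with that doc and with the "Connection refused" errors above, assuming standard Linux tooling:)
ulimit -n    # open-file limit in the current shell; should match the values from the system-config doc
nc -zv 10.0.0.141 7100    # from each node, confirm the other masters' RPC port 7100 is reachable
nc -zv 10.0.0.119 7100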
My App Engine deployment (flexible environment, Node.js 12) has suddenly started failing, seemingly due to an issue with Node.js on the Google side.
Build output here:
Step #1: Already have image (with digest): gcr.io/kaniko-project/executor@sha256:f87c11770a4d3ed33436508d206c584812cd656e6ed08eda1cff5c1ee44f5870
Step #1: INFO[0000] Removing ignored files from build context: [node_modules .dockerignore Dockerfile npm-debug.log yarn-error.log .git .hg .svn app.yaml]
Step #1: INFO[0004] Downloading base image gcr.io/google-appengine/nodejs@sha256:ef8be7b4dc77c3e71fbc85ca88186b13214af8f83b8baecc65e8ed85bb904ad5
Step #1: INFO[0019] Taking snapshot of full filesystem...
Step #1: INFO[0035] Using files from context: [/workspace]
Step #1: INFO[0036] COPY . /app/
Step #1: INFO[0036] Taking snapshot of files...
Step #1: INFO[0037] RUN /usr/local/bin/install_node '>=12.0.0'
Step #1: INFO[0037] cmd: /bin/sh
Step #1: INFO[0037] args: [-c /usr/local/bin/install_node '>=12.0.0']
Step #1: % Total % Received % Xferd Average Speed Time Time Time Current
Step #1: Dload Upload Total Spent Left Speed
100 32.1M 100 32.1M 0 0 66.9M 0 --:--:-- --:--:-- --:--:-- 66.8M
Step #1: % Total % Received % Xferd Average Speed Time Time Time Current
Step #1: Dload Upload Total Spent Left Speed
100 3838 100 3838 0 0 23116 0 --:--:-- --:--:-- --:--:-- 23260
Step #1: gpg: Signature made Tue Sep 8 15:43:07 2020 UTC using RSA key ID C17AB93C gpg: Can't check signature: public key not found
Step #1: The Node.js binary could not be verified.
Step #1: This means it may not be an officially released Node.js binary
Step #1: or may have been tampered with.
Step #1:
Step #1: Aborting the installation.
Step #1:
Step #1: The installation can be forced using the --ignore-verification-failure
Step #1: flag. However, it is strongly recommended that you install a version
Step #1: of Node.js that can be verified.
Step #1:
Step #1: Node installation failed: /opt/gcp/runtime/bootstrap_node exited with a non-zero exit code: 1
Step #1: error building image: error building stage: waiting for process to exit: exit status 1
Finished Step #1
ERROR
ERROR: build step 1 "gcr.io/kaniko-project/executor@sha256:f87c11770a4d3ed33436508d206c584812cd656e6ed08eda1cff5c1ee44f5870" failed: step exited with non-zero status: 1
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Is anybody else seeing this issue?
Thanks
Chris
In our case, the issue was that App Engine installed Node 14.10.0 because we had "node": "14.x" in the package.json. It seems the latest release has some issues.
I fixed the deploy by pinning the engine to a fixed version:
"engines": {
"node": "14.9"
}
If you are using 12.x, try one of the previous versions that worked.
The latest version that works is 14.16.0. Add the following to your package.json:
...
"engines": {
"node": "14.16.0"
},
...
Then deploy with gcloud app deploy. The GCP issue to follow is https://github.com/GoogleCloudPlatform/nodejs-docker/issues/214
This happens quite often whenever a new Node.js version is released, as the GAE builder "is not entirely backwards compatible between minor version upgrades" (source: https://github.com/GoogleCloudPlatform/nodejs-docker/issues/214#issuecomment-1276728834)
The fix is to pin down the last version that worked (in most cases, the 2nd most recent release: https://nodejs.org/en/download/releases/). You will most likely notice that a Node.js version was released around the time your build started failing.
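For illustration, a minimal package.json with the runtime pinned would look something like the following (the name and start script are placeholders); redeploy with gcloud app deploy afterwards:
{
  "name": "my-app",
  "engines": {
    "node": "14.16.0"
  },
  "scripts": {
    "start": "node index.js"
  }
}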
Hello, I'm trying to redeploy a simple service (hello world) that was working fine:
C:\Users\userssss\hola-mundo>sls invoke -f hello -s dev
{
"statusCode": 200,
"body": "{\n \"message\": \"Go Serverless v1.0! Your function executed successfully!\",\n \"input\": {}\n}"
}
But suddenly it started giving errors.
I tried setting NODE_TLS_REJECT_UNAUTHORIZED=0 but it still fails.
Could you help me please? I'm just starting to learn how to use AWS Lambda. Thank you.
C:\Users\userssss\hola-mundo>sls invoke -f hello -s dev
Serverless: Recoverable error occurred (unable to verify the first certificate), sleeping for 5 seconds. Try 1 of 4
Serverless: Recoverable error occurred (unable to verify the first certificate), sleeping for 5 seconds. Try 2 of 4
Serverless: Recoverable error occurred (unable to verify the first certificate), sleeping for 5 seconds. Try 3 of 4
Serverless: Recoverable error occurred (unable to verify the first certificate), sleeping for 5 seconds. Try 4 of 4
Serverless Error ---------------------------------------
unable to verify the first certificate
Get Support --------------------------------------------
Docs: docs.serverless.com
Bugs: github.com/serverless/serverless/issues
Issues: forum.serverless.com
Your Environment Information ---------------------------
Operating System: win32
Node Version: 12.13.1
Framework Version: 1.58.0
Plugin Version: 3.2.5
SDK Version: 2.2.1
Components Core Version: 1.1.2
Components CLI Version: 1.4.0
C:\Users\userssss\hola-mundo>sls deploy
Serverless: Packaging service...
Serverless: Excluding development dependencies...
Serverless: Recoverable error occurred (unable to verify the first certificate), sleeping for 5 seconds. Try 1 of 4
Serverless: Recoverable error occurred (unable to verify the first certificate), sleeping for 5 seconds. Try 2 of 4
Serverless: Recoverable error occurred (unable to verify the first certificate), sleeping for 5 seconds. Try 3 of 4
Serverless: Recoverable error occurred (unable to verify the first certificate), sleeping for 5 seconds. Try 4 of 4
Serverless Error ---------------------------------------
unable to verify the first certificate
Get Support --------------------------------------------
Docs: docs.serverless.com
Bugs: github.com/serverless/serverless/issues
Issues: forum.serverless.com
Your Environment Information ---------------------------
Operating System: win32
Node Version: 12.13.1
Framework Version: 1.58.0
Plugin Version: 3.2.5
SDK Version: 2.2.1
Components Core Version: 1.1.2
Components CLI Version: 1.4.0
I'm assuming you're behind a corporate firewall of some kind? If so, there are some documented options here: https://github.com/serverless/serverless/issues/3256
First I'd try NODE_TLS_REJECT_UNAUTHORIZED=0 sls deploy. If that works, you can try some of the permanent solutions I linked to.
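On Windows (cmd.exe) that would look something like the lines below. If a corporate proxy is re-signing TLS traffic, pointing Node at your company's CA bundle via NODE_EXTRA_CA_CERTS is the more permanent option; the certificate path here is a placeholder:
set NODE_TLS_REJECT_UNAUTHORIZED=0
sls deploy
rem More permanent: trust the corporate root CA instead of disabling verification
set NODE_EXTRA_CA_CERTS=C:\certs\corporate-root-ca.pem
sls deploy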
We are trying to scan our docker images using Anchore Engine Jenkins plugin.
Currently we create our application docker images, push it in our own private local registry and then deploy it in our test environments.
Now, we want to setup docker image scanning in our CI/CD process to check for any vulnerabilities.
We have installed Anchore Engine using the recommended Docker-Compose yaml method given in the Documentation link:
https://anchore.freshdesk.com/support/solutions/articles/36000020729-install-on-docker-swarm
Post installation, we installed the Anchore Container Image Scanner Plugin in Jenkins.
We configured the plugin as mentioned in the document link:
https://wiki.jenkins.io/display/JENKINS/Anchore+Container+Image+Scanner+Plugin
However, the scanning fails with the following error message:
2018-10-11T07:01:44.647 INFO AnchoreWorker Analysis request accepted, received image digest sha256:7d6fb7e5e7a74a4309cc436f6d11c29a96cbf27a4a8cb45a50cb0a326dc32fe8
2018-10-11T07:01:44.647 INFO AnchoreWorker Waiting for analysis of 10.180.25.2:5000/hello-world:latest, polling status periodically
2018-10-11T07:01:44.647 DEBUG AnchoreWorker anchore-engine get policy evaluation URL: http://10.180.25.2:8228/v1/images/sha256:7d6fb7e5e7a74a4309cc436f6d11c29a96cbf27a4a8cb45a50cb0a326dc32fe8/check?tag=10.180.25.2:5000/hello-world:latest&detail=true
2018-10-11T07:01:44.648 DEBUG AnchoreWorker Attempting anchore-engine get policy evaluation (1/300)
2018-10-11T07:01:44.675 DEBUG AnchoreWorker anchore-engine get policy evaluation failed. URL: http://10.180.25.2:8228/v1/images/sha256:7d6fb7e5e7a74a4309cc436f6d11c29a96cbf27a4a8cb45a50cb0a326dc32fe8/check?tag=10.180.25.2:5000/hello-world:latest&detail=true, status: HTTP/1.1 404 NOT FOUND, error: {
"detail": {},
"httpcode": 404,
"message": "image is not analyzed - analysis_status: not_analyzed"
}
NOTE:
In the image tag 10.180.25.2:5000/hello-world:latest, 10.180.25.2:5000 is our local private registry and hello-world:latest is the latest hello-world image available on Docker Hub, which we pulled and pushed to our registry to try out image scanning using Anchore Engine.
Unfortunately we are not able to find many resources online to help resolve the above issue.
If anyone has worked with Anchore Engine, could you please take a look and help us resolve this issue?
Also, any suggestions, alternatives to anchore-engine, or detailed steps in case we might have missed anything would be really appreciated.
The end of the output is as follows:
2018-10-15T00:48:43.880 WARN AnchoreWorker anchore-engine get policy evaluation failed. HTTP method: GET, URL: http://10.180.25.2:8228/v1/images/sha256:7d6fb7e5e7a74a4309cc436f6d11c29a96cbf27a4a8cb45a50cb0a326dc32fe8/check?tag=10.180.25.2:5000/hello-world:latest&detail=true, status: 404, error: {
"detail": {},
"httpcode": 404,
"message": "image is not analyzed - analysis_status: not_analyzed"
}
2018-10-15T00:48:43.880 WARN AnchoreWorker Exhausted all attempts polling anchore-engine. Analysis is incomplete for sha256:7d6fb7e5e7a74a4309cc436f6d11c29a96cbf27a4a8cb45a50cb0a326dc32fe8
2018-10-15T00:48:43.880 ERROR AnchorePlugin Failing Anchore Container Image Scanner Plugin step due to errors in plugin execution
hudson.AbortException: Timed out waiting for anchore-engine analysis to complete (increasing engineRetries might help). Check above logs for errors from anchore-engine
at com.anchore.jenkins.plugins.anchore.BuildWorker.runGatesEngine(BuildWorker.java:480)
at com.anchore.jenkins.plugins.anchore.BuildWorker.runGates(BuildWorker.java:343)
at com.anchore.jenkins.plugins.anchore.AnchoreBuilder.perform(AnchoreBuilder.java:338)
at hudson.tasks.BuildStepCompatibilityLayer.perform(BuildStepCompatibilityLayer.java:81)
at hudson.tasks.BuildStepMonitor$1.perform(BuildStepMonitor.java:20)
at hudson.model.AbstractBuild$AbstractBuildExecution.perform(AbstractBuild.java:744)
at hudson.model.Build$BuildExecution.build(Build.java:206)
at hudson.model.Build$BuildExecution.doRun(Build.java:163)
at hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:504)
at hudson.model.Run.execute(Run.java:1724)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:97)
at hudson.model.Executor.run(Executor.java:421)
I also checked the status and found the following:
docker run anchore/engine-cli:latest anchore-cli --u admin --p admin123 --url http://172.18.0.1:8228/v1 system status
Service analyzer (dockerhostid-anchore-engine, http://anchore-engine:8084): up
Service catalog (dockerhostid-anchore-engine, http://anchore-engine:8082): up
Service policy_engine (dockerhostid-anchore-engine, http://anchore-engine:8087): down (unavailable)
Service simplequeue (dockerhostid-anchore-engine, http://anchore-engine:8083): up
Service apiext (dockerhostid-anchore-engine, http://anchore-engine:8228): up
Service kubernetes_webhook (dockerhostid-anchore-engine, http://anchore-engine:8338): up
Engine DB Version: 0.0.7
Engine Code Version: 0.2.4
It seems the policy_engine service is down:
Service policy_engine (dockerhostid-anchore-engine, http://anchore-engine:8087): down (unavailable)
I also checked the Docker logs and found the following error:
[service:policy_engine] 2018-10-15 09:37:46+0000 [-] [bootstrap] [DEBUG] service (policy_engine) starting in: 4
[service:policy_engine] 2018-10-15 09:37:46+0000 [-] [bootstrap] [INFO] Registration complete.
[service:policy_engine] 2018-10-15 09:37:46+0000 [-] [bootstrap] [INFO] Checking feeds client credentials
[service:policy_engine] 2018-10-15 09:37:46+0000 [-] [bootstrap] [DEBUG] Initializing a feeds client
[service:policy_engine] 2018-10-15 09:37:47+0000 [-] [bootstrap] [DEBUG] init values: [None, None, None, (), None, None]
[service:policy_engine] 2018-10-15 09:37:47+0000 [-] [bootstrap] [DEBUG] using values: ['https://ancho.re/v1/service/feeds', 'https://ancho.re/oauth/token', 'https://ancho.re/v1/account/users', 'anon@ancho.re', 3, 60]
[service:policy_engine] 2018-10-15 09:37:47+0000 [-] [urllib3.connectionpool] [DEBUG] Starting new HTTPS connection (1): ancho.re
[service:policy_engine] 2018-10-15 09:37:50+0000 [-] [bootstrap] [ERROR] Preflight checks failed with error: HTTPSConnectionPool(host='ancho.re', port=443): Max retries exceeded with url: /v1/account/users/anon@ancho.re (Caused by NewConnectionError('<urllib3.connection.VerifiedHTTPSConnection object at 0x7ffa905f0b90>: Failed to establish a new connection: [Errno 113] No route to host',)). Aborting service startup
Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/anchore_manager/cli/service.py", line 158, in startup_service
raise Exception("process exited: " + str(rc))
Exception: process exited: 1
[anchore-policy-engine] [anchore_manager.cli.service/startup_service()] [INFO] service process exited at (Mon Oct 15 09:37:50 2018): process exited: 1
[anchore-policy-engine] [anchore_manager.cli.service/startup_service()] [INFO] exiting service thread
Thanks and Regards,
Rohan Shetty
When images are added to anchore-engine, they are queued for analysis which moves them through a simple state machine that starts with ‘not_analyzed’, goes to ‘analyzing’ and finally ends in either ‘analyzed’ or ‘analysis_failed’. Only when an image has reached ‘analyzed’ will a policy evaluation be possible.
The anchore Jenkins plugin will add an image, then poll the engine for image status/evaluation for the configured number of tries (default 300). Once the image goes to ‘analyzed’ (where policy evaluation is possible), the plugin will then receive a policy evaluation result from the engine.
The plugin will fail the build (by default) if the max retries have been performed and the image has not reached ‘analyzed’, or if the image does reach ‘analyzed’ but the policy evaluation produces a ‘fail’ result (meaning the image didn’t pass your configured policy checks). Note that all build failure behavior can be controlled in the plugin (i.e. there are options to allow the plugin to succeed even if the analysis or image eval fails).
You’ll need to look at the end of the output from your build run (instead of just the beginning from your post), and combined with the information above, it should be clear which scenario is causing the plugin to fail the build.
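A quick way to see which state an image is in is to ask the engine directly with anchore-cli (reusing the credentials/URL from the system status command you already ran); something like:
docker run anchore/engine-cli:latest anchore-cli --u admin --p admin123 --url http://172.18.0.1:8228/v1 image get 10.180.25.2:5000/hello-world:latest
The output should include the image's analysis status (not_analyzed / analyzing / analyzed / analysis_failed).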
We have resolved the issue.
Root Cause:
We were not able to establish a successful HTTPS connection to https://ancho.re from within the anchore-engine Docker container.
As a result, the policy_engine service was not able to start.
https://ancho.re is required to download policy feeds and sync them periodically. Without these feeds, anchore-engine is not able to analyse Docker images.
Solution:
1) We passed an HTTPS_PROXY URL as an environment variable in the docker-compose.yaml of anchore-engine (a sketch follows below).
We used this proxy URL to bypass restrictions in our environment and establish a connection with https://ancho.re.
2) Restarted the docker containers.
Finally we got all services up and running including Anchore policy-engine.
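For reference, the change in docker-compose.yaml was roughly the following (the proxy URL is a placeholder, and the service names may differ slightly depending on the compose file you started from):
services:
  anchore-engine:
    environment:
      - HTTPS_PROXY=http://proxy.example.internal:3128   # placeholder; use your own proxy URL
      - NO_PROXY=anchore-db,anchore-engine               # keep engine-internal and DB traffic off the proxy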
FYI:
It takes a while to download all the required Feeds depending on your internet speed.
Lastly, Thanks to the Anchore community for quick responses and support over slack.
Hope this helps.
Warm Regards,
Rohan Shetty
I have a problem with Azure Service Fabric.
I installed it (on Windows 7) as described in https://azure.microsoft.com/en-gb/documentation/articles/service-fabric-get-started/.
Then I tried to run a Service Fabric application from Visual Studio 2015 and got the error “Connect-ServiceFabricCluster : No cluster endpoint is reachable, please check if there is connectivity/firewall/DNS issue”.
Here is the full log of that run:
1>------ Build started: Project: Application2, Configuration: Debug x64 ------
2>------ Deploy started: Project: Application2, Configuration: Debug x64 ------
-------- Package started: Project: Application2, Configuration: Debug x64 ------
Application2 -> c:\temp\Application2\Application2\pkg\Debug
-------- Package: Project: Application2 succeeded, Time elapsed: 00:00:01.7361084 --------
2>Started executing script 'Set-LocalClusterReady'.
2>Import-Module 'C:\Program Files\Microsoft SDKs\Service Fabric\Tools\Scripts\DefaultLocalClusterSetup.psm1'; Set-LocalClusterReady
2>--------------------------------------------
2>Local Service Fabric Cluster is not setup...
2>Please wait while we setup the Local Service Fabric Cluster. This may take few minutes...
2>
2>Using Cluster Data Root: C:\SfDevCluster\Data
2>Using Cluster Log Root: C:\SfDevCluster\Log
2>
2>Create node configuration succeeded
2>Starting service FabricHostSvc. This may take a few minutes...
2>
2>Waiting for Service Fabric Cluster to be ready. This may take a few minutes...
2>Local Cluster ready status: 4% completed.
2>Local Cluster ready status: 8% completed.
2>Local Cluster ready status: 12% completed.
2>Local Cluster ready status: 17% completed.
2>Local Cluster ready status: 21% completed.
2>Local Cluster ready status: 25% completed.
2>Local Cluster ready status: 29% completed.
2>Local Cluster ready status: 33% completed.
2>Local Cluster ready status: 38% completed.
2>Local Cluster ready status: 42% completed.
2>Local Cluster ready status: 46% completed.
2>Local Cluster ready status: 50% completed.
2>Local Cluster ready status: 54% completed.
2>Local Cluster ready status: 58% completed.
2>Local Cluster ready status: 62% completed.
2>Local Cluster ready status: 67% completed.
2>Local Cluster ready status: 71% completed.
2>Local Cluster ready status: 75% completed.
2>Local Cluster ready status: 79% completed.
2>Local Cluster ready status: 83% completed.
2>Local Cluster ready status: 88% completed.
2>Local Cluster ready status: 92% completed.
2>Local Cluster ready status: 96% completed.
2>Local Cluster ready status: 100% completed.
2>WARNING: Service Fabric Cluster is taking longer than expected to connect.
2>
2>Waiting for Naming Service to be ready. This may take a few minutes...
2>Connect-ServiceFabricCluster : No cluster endpoint is reachable, please check
2>if there is connectivity/firewall/DNS issue.
2>At C:\Program Files\Microsoft SDKs\Service
2>Fabric\Tools\Scripts\ClusterSetupUtilities.psm1:521 char:12
2>+ [void](Connect-ServiceFabricCluster @connParams)
2>+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
2> + CategoryInfo : InvalidOperation: (:) [Connect-ServiceFabricClus
2> ter], FabricException
2> + FullyQualifiedErrorId : TestClusterConnectionErrorId,Microsoft.ServiceFa
2> bric.Powershell.ConnectCluster
2>
2>Naming Service ready status: 8% completed.
2>Naming Service ready status: 17% completed.
2>Naming Service ready status: 25% completed.
2>Naming Service ready status: 33% completed.
2>Naming Service ready status: 42% completed.
2>Naming Service ready status: 50% completed.
2>Naming Service ready status: 58% completed.
2>Naming Service ready status: 67% completed.
2>Naming Service ready status: 75% completed.
2>Naming Service ready status: 83% completed.
2>Naming Service ready status: 92% completed.
2>Naming Service ready status: 100% completed.
2>WARNING: Naming Service is taking longer than expected to be ready...
2>Local Service Fabric Cluster created successfully.
2>--------------------------------------------------
2>Launching Service Fabric Local Cluster Manager...
2>You can use Service Fabric Local Cluster Manager (system tray application) to manage your local dev cluster.
2>Finished executing script 'Set-LocalClusterReady'.
2>Time elapsed: 00:07:01.8147993
2>The PowerShell script failed to execute.
========== Build: 1 succeeded, 0 failed, 1 up-to-date, 0 skipped ==========
========== Deploy: 0 succeeded, 1 failed, 0 skipped ==========
This worked for me and I stumbled upon the solution on an MSDN forum.
Most likely your C++ runtime is corrupt and that needs to be reinstalled.
You will have to manually execute vcredist_x64.exe which can be found at
C:\Program Files\Microsoft Service Fabric\bin\Fabric\Fabric.Code\vcredist_x64.exe
Once that is done, you can choose whether or not to reboot your machine; I chose to reboot it and then ran the following commands:
C:\Program Files\Microsoft SDKs\Service Fabric\ClusterSetup\CleanCluster.ps1
C:\Program Files\Microsoft SDKs\Service Fabric\ClusterSetup\DevClusterSetup.ps1
I hope this helps!
I had to disconnect the VPN to my office in addition to following the directions that Varun had shown above.
Thanks @Varun for sharing this!!
Once I connected my VPN again, I could not run the system.
Hope this helps someone.
After trying many solutions, I found that this GitHub issue helped: I added an entry to the setup configuration and then had to recreate the cluster.
Configuration File
C:\Program Files\Microsoft SDKs\Service Fabric\ClusterSetup\NonSecure\OneNode\ClusterManifestTemplate.json:
Entry to add
{
"name": "FabricContainerAppsEnabled",
"value": "false"
}
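This entry most likely belongs in the parameters array of the Hosting section under fabricSettings in that template; the exact layout can differ between SDK versions, so double-check against your own file. A sketch:
"fabricSettings": [
  {
    "name": "Hosting",
    "parameters": [
      {
        "name": "FabricContainerAppsEnabled",
        "value": "false"
      }
    ]
  }
]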
By default the Service Fabric development instance listens only on the loopback address (simply run the netstat -an command in CMD to see which ports are open).
After the change described below, my SF instance listens on port 19000 for all addresses, so external connections are possible; by default, port 19000 listens only on the loopback address (127.0.0.1):
[::1]:19000
All you need is to change IPAddressOrFQDN from localhost to your external IPv4 address. The change should be visible in the manifest file.
All SF cluster configuration files on Windows 7 are located in C:\SfDevCluster\Data. There you can find many XML documents, like:
clusterManifest
FabricHostSettings
To change IPAddressOrFQDN I replaced all occurrences of the word 'localhost' with 'my.ip.address.here'. The files to be modified are also found deeper in the tree:
C:\SfDevCluster\Data\_Node_0\Fabric
C:\SfDevCluster\Data\_Node_0\Fabric\Fabric.Data
C:\SfDevCluster\Data\_Node_0\Fabric\Fabric.Config.0.0
and so on
Before making the changes, stop your cluster; after you are done, start it again.
There is another, better method: you can change the generating scripts in the directory C:\Program Files\Microsoft SDKs\Service Fabric\ClusterSetup.
After a cluster reset these settings will be lost. Remember that you do this at your own risk; the development version is unsecured and should not be used in a production environment.
If after the changes you cannot connect to SF, make sure the firewall doesn't block the port (you can temporarily disable your firewall).
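Two quick checks along those lines, from an elevated command prompt (the rule name is arbitrary):
rem Confirm which address the cluster endpoint is actually bound to
netstat -an | findstr :19000
rem If the firewall is the blocker, allow inbound traffic on port 19000
netsh advfirewall firewall add rule name="Service Fabric 19000" dir=in action=allow protocol=TCP localport=19000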
If reinstalling the C++ runtime does not solve your issue, you can narrow it down to a firewall problem. The network may be preventing the FabricHostSvc service from running (you can check this in services.msc), which is due to security restrictions. If you are able to disable your firewall or switch to a network without restrictions, the issue should resolve on its own.
If you are able to get FabricHostSvc started, your issue will be solved.
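To check and start the service from an elevated PowerShell prompt, something like:
Get-Service FabricHostSvc     # check the current status of the Service Fabric host service
Start-Service FabricHostSvc   # try to start it; a failure here usually points back at the runtime or firewall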
Hope this helps.....
Hey, the first answer (the one @Varun Rathore gave) worked for me. But as I was trying to deploy a container on Service Fabric locally, switching from a 1-node cluster to a 5-node cluster again and again, this error came up often. So what I did was open PowerShell as Administrator and follow the next 2 steps.
First, go to the following path:
cd "C:\Program Files\Microsoft SDKs\Service Fabric\ClusterSetup"
Second, run these 2 scripts:
.\CleanCluster.ps1
.\DevClusterSetup.ps1
This will set up a 1-node cluster on your local machine.
You can do this from Service Fabric Local Cluster Manager, but it did not work for me. And repairing vcredist_x64.exe and rebooting your computer (Windows 10 Pro) every time is frustrating. This works for me.
Either you can reset the fabric cluster (if you don't have a stateful service and persisted data is not needed), or you can re-launch the fabric cluster; sometimes switching the number of nodes also helps.