Apache Spark compile failed while installing Netty

We untarred spark-0.9.0-incubating.tgz and are trying to build it for use with YARN.
SPARK_HADOOP_VERSION=2.0.0-cdh4.6.0 SPARK_YARN=true sbt/sbt assembly
...
[info] Resolving io.netty#netty-all;4.0.13.Final ...
[error] Server access Error: Connection timed out url=https://oss.sonatype.org/content/repositories/snapshots/io/netty/netty-all/4.0.13.Final/netty-all-4.0.13.Final.pom
[error] Server access Error: Connection timed out url=https://oss.sonatype.org/service/local/staging/deploy/maven2/io/netty/netty-all/4.0.13.Final/netty-all-4.0.13.Final.pom
...
If I just copy and paste the URL into a browser, I get:
404 - ItemNotFoundException
Retrieval of /io/netty/netty-all/4.0.13.Final/netty-all-4.0.13.Final.pom from M2Repository(id=snapshots) is forbidden by repository policy SNAPSHOT.
org.sonatype.nexus.proxy.ItemNotFoundException: Retrieval of /io/netty/netty-all/4.0.13.Final/netty-all-4.0.13.Final.pom from M2Repository(id=snapshots) is forbidden by repository policy SNAPSHOT.
at org.sonatype.nexus.proxy.maven.AbstractMavenRepository.doRetrieveItem(AbstractMavenRepository.java:380)
at org.sonatype.nexus.proxy.maven.maven2.M2Repository.doRetrieveItem(M2Repository.java:396)
at org.sonatype.nexus.proxy.repository.AbstractRepository.retrieveItem(AbstractRepository.java:765)
at org.sonatype.nexus.proxy.repository.AbstractRepository.retrieveItem(AbstractRepository.java:608)
at org.sonatype.nexus.proxy.router.DefaultRepositoryRouter.retrieveItem(DefaultRepositoryRouter.java:155)
at org.sonatype.nexus.web.content.NexusContentServlet.doGet(NexusContentServlet.java:359)
at org.sonatype.nexus.web.content.NexusContentServlet.service(NexusContentServlet.java:331)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:848)
I have seen this reported in a number of places, but with no solution. Is this error because we are behind a corporate firewall, or is it due to something else? Please advise.

I had the proxy set as environment variables, but it appears they are not being picked up. Adding them to sbt directly worked for me.
Edit $SPARK_HOME/sbt/sbt
For example,
EXTRA_ARGS="-Dhttp.proxySet=true -Dhttp.proxyHost=myproxy.mycompany.com -Dhttp.proxyPort=80 -Dhttps.proxySet=true -Dhttps.proxyHost=myproxy.mycompany.com -Dhttps.proxyPort=80 -Dftp.proxySet=true -Dftp.proxyHost=myproxy.mycompany.com -Dftp.proxyPort=80 -Dhttp.nonProxyHosts=mydomain -Dhttps.nonProxyHosts=mydomain -Dftp.nonProxyHosts=mydomain"

Related

SVN Put request failed at 1059.44MB

Error: Commit failed (details follow): Error: While preparing 'C:\IT-TEST\FILE.iso' for commit Error: PUT request on '/test-repo/!svn/txr/6581-57h/IT-TEST/FILE.iso' failed
I got this error when trying to upload (with TortoiseSVN) a file larger than about 1059-1060 MB to my repo.
What needs to be edited on the server side?
OS: Oracle Linux
Software: SVN+HTTPd
Resolved by adding LimitRequestBody 0 to httpd.conf and reloading the service.
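For anyone hitting the same limit, a minimal sketch of the change, assuming a stock httpd layout on Oracle Linux (the directive can also be scoped to the repository's <Location> block; 0 means unlimited):

# in /etc/httpd/conf/httpd.conf -- lift the request-body size limit
LimitRequestBody 0

# then reload httpd to pick up the change
sudo systemctl reload httpd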
Thanks all for help :)

Anchore Engine - Jenkins CI plugin

We are trying to scan our docker images using Anchore Engine Jenkins plugin.
Currently we create our application Docker images, push them to our own private local registry, and then deploy them to our test environments.
Now we want to set up Docker image scanning in our CI/CD process to check for vulnerabilities.
We installed Anchore Engine using the recommended Docker Compose YAML method given in the documentation link:
https://anchore.freshdesk.com/support/solutions/articles/36000020729-install-on-docker-swarm
After installation, we installed the Anchore Container Image Scanner Plugin in Jenkins.
We configured the plugin as mentioned in the document link:
https://wiki.jenkins.io/display/JENKINS/Anchore+Container+Image+Scanner+Plugin
However, the scanning fails with the following error message:
2018-10-11T07:01:44.647 INFO AnchoreWorker Analysis request accepted, received image digest sha256:7d6fb7e5e7a74a4309cc436f6d11c29a96cbf27a4a8cb45a50cb0a326dc32fe8
2018-10-11T07:01:44.647 INFO AnchoreWorker Waiting for analysis of 10.180.25.2:5000/hello-world:latest, polling status periodically
2018-10-11T07:01:44.647 DEBUG AnchoreWorker anchore-engine get policy evaluation URL: http://10.180.25.2:8228/v1/images/sha256:7d6fb7e5e7a74a4309cc436f6d11c29a96cbf27a4a8cb45a50cb0a326dc32fe8/check?tag=10.180.25.2:5000/hello-world:latest&detail=true
2018-10-11T07:01:44.648 DEBUG AnchoreWorker Attempting anchore-engine get policy evaluation (1/300)
2018-10-11T07:01:44.675 DEBUG AnchoreWorker anchore-engine get policy evaluation failed. URL: http://10.180.25.2:8228/v1/images/sha256:7d6fb7e5e7a74a4309cc436f6d11c29a96cbf27a4a8cb45a50cb0a326dc32fe8/check?tag=10.180.25.2:5000/hello-world:latest&detail=true, status: HTTP/1.1 404 NOT FOUND, error: {
"detail": {},
"httpcode": 404,
"message": "image is not analyzed - analysis_status: not_analyzed"
}
NOTE:
In the image tag 10.180.25.2:5000/hello-world:latest, 10.180.25.2:5000 is our local private registry, and hello-world:latest is the latest hello-world image from Docker Hub, which we pulled and pushed to our registry to try out image scanning with Anchore Engine.
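For reference, the pull/tag/push sequence described above, using standard Docker commands and our registry address:

docker pull hello-world:latest
docker tag hello-world:latest 10.180.25.2:5000/hello-world:latest
docker push 10.180.25.2:5000/hello-world:latest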
Unfortunately we have not been able to find much online about this issue.
If anyone has worked with Anchore Engine, could you please take a look and help us resolve it?
Also, any suggestions or alternatives to Anchore Engine, or detailed steps in case we missed anything, would be really appreciated.
The end of the output is as follows:
2018-10-15T00:48:43.880 WARN AnchoreWorker anchore-engine get policy evaluation failed. HTTP method: GET, URL: http://10.180.25.2:8228/v1/images/sha256:7d6fb7e5e7a74a4309cc436f6d11c29a96cbf27a4a8cb45a50cb0a326dc32fe8/check?tag=10.180.25.2:5000/hello-world:latest&detail=true, status: 404, error: {
"detail": {},
"httpcode": 404,
"message": "image is not analyzed - analysis_status: not_analyzed"
}
2018-10-15T00:48:43.880 WARN AnchoreWorker Exhausted all attempts polling anchore-engine. Analysis is incomplete for sha256:7d6fb7e5e7a74a4309cc436f6d11c29a96cbf27a4a8cb45a50cb0a326dc32fe8
2018-10-15T00:48:43.880 ERROR AnchorePlugin Failing Anchore Container Image Scanner Plugin step due to errors in plugin execution
hudson.AbortException: Timed out waiting for anchore-engine analysis to complete (increasing engineRetries might help). Check above logs for errors from anchore-engine
at com.anchore.jenkins.plugins.anchore.BuildWorker.runGatesEngine(BuildWorker.java:480)
at com.anchore.jenkins.plugins.anchore.BuildWorker.runGates(BuildWorker.java:343)
at com.anchore.jenkins.plugins.anchore.AnchoreBuilder.perform(AnchoreBuilder.java:338)
at hudson.tasks.BuildStepCompatibilityLayer.perform(BuildStepCompatibilityLayer.java:81)
at hudson.tasks.BuildStepMonitor$1.perform(BuildStepMonitor.java:20)
at hudson.model.AbstractBuild$AbstractBuildExecution.perform(AbstractBuild.java:744)
at hudson.model.Build$BuildExecution.build(Build.java:206)
at hudson.model.Build$BuildExecution.doRun(Build.java:163)
at hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:504)
at hudson.model.Run.execute(Run.java:1724)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:97)
at hudson.model.Executor.run(Executor.java:421)
I also checked the engine status and found the following:
docker run anchore/engine-cli:latest anchore-cli --u admin --p admin123 --url http://172.18.0.1:8228/v1 system status
Service analyzer (dockerhostid-anchore-engine, http://anchore-engine:8084): up
Service catalog (dockerhostid-anchore-engine, http://anchore-engine:8082): up
Service policy_engine (dockerhostid-anchore-engine, http://anchore-engine:8087): down (unavailable)
Service simplequeue (dockerhostid-anchore-engine, http://anchore-engine:8083): up
Service apiext (dockerhostid-anchore-engine, http://anchore-engine:8228): up
Service kubernetes_webhook (dockerhostid-anchore-engine, http://anchore-engine:8338): up
Engine DB Version: 0.0.7
Engine Code Version: 0.2.4
It seems the policy_engine service is down:
Service policy_engine (dockerhostid-anchore-engine, http://anchore-engine:8087): down (unavailable)
I also checked the Docker logs and found the following error:
[service:policy_engine] 2018-10-15 09:37:46+0000 [-] [bootstrap] [DEBUG] service (policy_engine) starting in: 4
[service:policy_engine] 2018-10-15 09:37:46+0000 [-] [bootstrap] [INFO] Registration complete.
[service:policy_engine] 2018-10-15 09:37:46+0000 [-] [bootstrap] [INFO] Checking feeds client credentials
[service:policy_engine] 2018-10-15 09:37:46+0000 [-] [bootstrap] [DEBUG] Initializing a feeds client
[service:policy_engine] 2018-10-15 09:37:47+0000 [-] [bootstrap] [DEBUG] init values: [None, None, None, (), None, None]
[service:policy_engine] 2018-10-15 09:37:47+0000 [-] [bootstrap] [DEBUG] using values: ['https://ancho.re/v1/service/feeds', 'https://ancho.re/oauth/token', 'https://ancho.re/v1/account/users', 'anon@ancho.re', 3, 60]
[service:policy_engine] 2018-10-15 09:37:47+0000 [-] [urllib3.connectionpool] [DEBUG] Starting new HTTPS connection (1): ancho.re
[service:policy_engine] 2018-10-15 09:37:50+0000 [-] [bootstrap] [ERROR] Preflight checks failed with error: HTTPSConnectionPool(host='ancho.re', port=443): Max retries exceeded with url: /v1/account/users/anon@ancho.re (Caused by NewConnectionError('<urllib3.connection.VerifiedHTTPSConnection object at 0x7ffa905f0b90>: Failed to establish a new connection: [Errno 113] No route to host',)). Aborting service startup
Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/anchore_manager/cli/service.py", line 158, in startup_service
raise Exception("process exited: " + str(rc))
Exception: process exited: 1
[anchore-policy-engine] [anchore_manager.cli.service/startup_service()] [INFO] service process exited at (Mon Oct 15 09:37:50 2018): process exited: 1
[anchore-policy-engine] [anchore_manager.cli.service/startup_service()] [INFO] exiting service thread
Thanks and Regards,
Rohan Shetty
When images are added to anchore-engine, they are queued for analysis which moves them through a simple state machine that starts with ‘not_analyzed’, goes to ‘analyzing’ and finally ends in either ‘analyzed’ or ‘analysis_failed’. Only when an image has reached ‘analyzed’ will a policy evaluation be possible.
The anchore Jenkins plugin will add an image, then poll the engine for image status/evaluation for the configured number of tries (default 300). Once the image goes to ‘analyzed’ (where policy evaluation is possible), the plugin will then receive a policy evaluation result from the engine.
The plugin will fail the build (by default) if the max retries have been exhausted and the image has not reached ‘analyzed’, or if the image does reach ‘analyzed’ but the policy evaluation produces a ‘fail’ result (meaning the image didn’t pass your configured policy checks). Note that all build failure behavior can be controlled in the plugin (i.e. there are options to allow the plugin to succeed even if the analysis or image eval fails).
You’ll need to look at the end of the output from your build run (instead of just the beginning from your post), and combined with the information above, it should be clear which scenario is causing the plugin to fail the build.
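As a sketch, you can also poll the analysis state by hand with the same CLI container and credentials used in the status check above (anchore-cli's image get subcommand reports the current analysis_status for a tag):

docker run anchore/engine-cli:latest anchore-cli --u admin --p admin123 --url http://172.18.0.1:8228/v1 image get 10.180.25.2:5000/hello-world:latest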
We have resolved the issue.
Root Cause:
We were not able to establish an HTTPS connection to https://ancho.re from within the anchore-engine Docker container.
As a result, the policy_engine service was not able to start.
https://ancho.re is required to download the policy feeds and sync them periodically. Without these feeds, anchore-engine is not able to analyze Docker images.
Solution:
1) We passed an HTTPS_PROXY URL as an environment variable in the docker-compose.yaml of anchore-engine (see the sketch below).
We used this proxy URL to bypass the restrictions in our environment and establish a connection with https://ancho.re.
2) Restarted the docker containers.
Finally we got all services up and running, including the Anchore policy_engine.
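A minimal sketch of the compose change, assuming the stock anchore-engine docker-compose.yaml (the proxy URL is a placeholder for your own; NO_PROXY keeps traffic between the compose services off the proxy):

  anchore-engine:
    environment:
      # placeholder proxy URL -- substitute your corporate proxy
      - HTTPS_PROXY=http://myproxy.mycompany.com:80
      - NO_PROXY=localhost,127.0.0.1,anchore-db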
FYI:
It takes a while to download all the required feeds, depending on your internet speed.
Lastly, Thanks to the Anchore community for quick responses and support over slack.
Hope this helps.
Warm Regards,
Rohan Shetty

Request cannot be proceed error in Jhipster registry version 4.0.0

I downloaded JHipster Registry version 4.0.0, imported it, and ran the registry successfully, but when I open localhost:8761 I get a "request cannot be proceed" error, with no further error information.
When I run the mvnw command I get: Exception in thread "main" javax.net.ssl.SSLHandshakeException: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target.
One way is to use the http protocol instead of https.
To do this, go into your project directory > .mvn > wrapper > maven-wrapper.properties.
You will find something like distributionUrl=https://repo1.maven.org/maven2/org/apache/maven/apache-maven/3.5.3/apache-maven-3.5.3-bin.zip.
Change this URL to use http instead of https.
Other solutions are more complex, but there is no harm in trying them:
Problems connecting via HTTPS/SSL through own Java client
Calling web service: javax.net.ssl.SSLException: Received fatal alert: protocol_version
javax.net.ssl.SSLException: Received fatal alert: protocol_version
Simply pass "-Dhttps.protocols=TLSv1.2" as an extra argument.
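Two equivalent ways to pass it when building with the Maven wrapper, as a sketch (MAVEN_OPTS is the standard environment variable Maven reads extra JVM options from):

./mvnw -Dhttps.protocols=TLSv1.2
# or export it so every subsequent build picks it up:
export MAVEN_OPTS="-Dhttps.protocols=TLSv1.2"
./mvnw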

Chef-server-ctl reconfigure/ Creating Admin User on chef server

I am fairly new to Linux (and brand new to Chef) and I have run into an issue when setting up my Chef server. I am trying to create an admin user with the command
sudo chef-server-ctl user-create admin Admin Ladmin admin@example.com
examplepass -f admin.pem
but I keep getting this error:
ERROR: Connection refused connecting...
ERROR: Connection refused connecting to https://127.0.0.1/users/, retry 5/5
ERROR: Network Error: Connection refused - Connection refused
connecting to https://..., giving up
Check your knife configuration and network settings
I also noticed that when I ran chef-server-ctl I got this output:
[2016-12-21T13:24:59-05:00] ERROR: Running exception handlers Running
handlers complete
[2016-12-21T13:24:59-05:00] ERROR: Exception
handlers complete Chef Client failed. 0 resources updated in 01 seconds
[2016-12-21T13:24:59-05:00] FATAL: Stacktrace dumped to
/var/opt/opscode/local-mode-cache/chef-stacktrace.out
[2016-12-21T13:24:59-05:00] FATAL: Please provide the contents of the
stacktrace.out file if you file a bug report
[2016-12-21T13:24:59-05:00] FATAL:
Chef::Exceptions::CannotDetermineNodeName: Unable to determine node
name: configure node_name or configure the system's hostname and fqdn
I read that this error is due to a prerequisite mistake, but I'm uncertain what that means or how to fix it, so any input would be greatly appreciated.
Your server does not have a valid FQDN (fully qualified domain name, i.e. a full host name). You'll have to fix this before installing Chef server.
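A minimal sketch of setting one on a systemd-based distro; the host and domain names are placeholders for your own:

# set the FQDN
sudo hostnamectl set-hostname chef-server.example.com
# make sure it resolves locally, e.g. via /etc/hosts
echo "127.0.1.1 chef-server.example.com chef-server" | sudo tee -a /etc/hosts
# verify -- this should print the FQDN
hostname -f

Then re-run chef-server-ctl reconfigure and retry the user-create command.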

Spark Error: Error downloading resource: SSL connect error

I am attempting to run a job on a Spark cluster set up in Mesos. I can run a job if I copy the jar to the server and then use a file: URL, but I cannot get Spark to download a jar over https:. Every time, I get the error below in the stderr file.
I0226 00:11:05.618361 22652 logging.cpp:172] INFO level logging started!
I0226 00:11:05.618552 22652 fetcher.cpp:409] Fetcher Info: ...
I0226 00:11:05.619721 22652 fetcher.cpp:364] Fetching URI 'https://jenkins.company.com/nexus/...
I0226 00:11:05.619738 22652 fetcher.cpp:238] Fetching directly into the sandbox directory
I0226 00:11:05.619751 22652 fetcher.cpp:176] Fetching URI 'https://jenkins.company.com/nexus/...
I0226 00:11:05.619762 22652 fetcher.cpp:126] Downloading resource from 'https://jenkins.company.com/nexus/...
Failed to fetch 'https://jenkins.company.com/nexus/... ': Error downloading resource: SSL connect error
Failed to synchronize with slave (it's probably exited)
I am able to use wget to download the jar from the specified URL. I have also verified that the JDK on the server has the correct certificate for the Nexus server from which I am attempting to download the jar.
I am new to Spark and Mesos and any help resolving this issue would be greatly appreciated.
Did you specify your private Nexus repository with the --repositories flag when submitting the application?
I personally never use encryption together with Spark, but from the docs it seems to be possible/necessary to configure it within Spark itself. I guess configuring just the JDK is not enough.
See
http://spark.apache.org/docs/latest/submitting-applications.html#advanced-dependency-management
http://spark.apache.org/docs/latest/configuration.html#encryption
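As a sketch of the first suggestion, with placeholder coordinates and master address alongside your Nexus URL (--repositories is documented in the advanced-dependency-management link above):

spark-submit \
  --repositories https://jenkins.company.com/nexus/content/repositories/releases \
  --packages com.example:my-app_2.10:1.0 \
  --master mesos://<mesos-master>:5050 \
  my-driver.jar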
