Cannot change a file in a Docker container? - linux

I run an Elasticsearch cluster in a production environment using Consul + overlay networking + Docker. I attached to a container, and when I changed elasticsearch.yml, another file named elasticsearch.yml~ appeared. Then, when I run Elasticsearch, I get this error:
Exception in thread "main" ElasticsearchException[Failed to load logging configuration]; nested: NoSuchFileException[/usr/local/biop/elasticsearch/config/elasticsearch.yml~];
Likely root cause: java.nio.file.NoSuchFileException: /usr/local/biop/elasticsearch/config/elasticsearch.yml~
at sun.nio.fs.UnixException.translateToIOException(UnixException.java:86)
at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)
at sun.nio.fs.UnixFileAttributeViews$Basic.readAttributes(UnixFileAttributeViews.java:55)
at sun.nio.fs.UnixFileSystemProvider.readAttributes(UnixFileSystemProvider.java:144)
at sun.nio.fs.LinuxFileSystemProvider.readAttributes(LinuxFileSystemProvider.java:99)
at java.nio.file.Files.readAttributes(Files.java:1737)
at java.nio.file.FileTreeWalker.getAttributes(FileTreeWalker.java:225)
at java.nio.file.FileTreeWalker.visit(FileTreeWalker.java:276)
at java.nio.file.FileTreeWalker.next(FileTreeWalker.java:372)
at java.nio.file.Files.walkFileTree(Files.java:2706)
at org.elasticsearch.common.logging.log4j.LogConfigurator.resolveConfig(LogConfigurator.java:142)
at org.elasticsearch.common.logging.log4j.LogConfigurator.configure(LogConfigurator.java:103)
at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:243)
at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:35)
I don't know why this file named 'elasticsearch.yml~' appears, and it can't be deleted. How can I solve this problem? Thanks.

The extra file is a backup created by your text/code editor. Once you are done editing the file, you should simply delete it.
It is Elasticsearch, not Docker, that picks the backup up: as the stack trace shows, the log configurator walks the config directory and tries to load elasticsearch.yml~ as a configuration file.
That said, the "[Failed to load logging configuration]" error might not be caused by the backup file alone; the logging configuration in your elasticsearch.yml probably needs attention as well.
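If the editor in the container is vim or Emacs, the trailing tilde is their backup-file convention. A minimal cleanup sketch (the path comes from the stack trace; the vim settings are an assumption about your setup):

# Remove the editor backup so the config-directory walk no longer finds it
rm /usr/local/biop/elasticsearch/config/elasticsearch.yml~

# If the editor is vim, stop it leaving backup files next to the originals
printf 'set nobackup\nset nowritebackup\n' >> ~/.vimrc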

Related

SoapUI: specify an alternate log dir as a property defined on the command line

I'm upgrading from SoapUI 5.4.0 to 5.7.0 and trying to put the log files in a specific directory. Note: The alternate error logs directory was working prior to the upgrade.
I have both the following specified in my JAVA_OPTS for SoapUITestCaseRunner
-Dsoapui.logroot="%SOAPUI_LOGSDIR%"
-Dsoapui.log4j.config="%SOAPUI_HOME%/soapui-log4j.xml"
In my soapui-log4j.xml I specify the error file as:
<RollingFile name="ERRORFILE"
             fileName="${soapui.logroot}/soapui-errors.log"
             filePattern="${soapui.logroot}/soapui-errors.log.%i"
             append="true">
The error file then gets created without resolving ${soapui.logroot} e.g.
$ find . -name "*errors*"
./${soapui.logroot}/soapui-errors.log
I also tried it as a lookup but ended up with this:
ERROR Unable to create file ${sys:soapui.logroot}/soapui-errors.log java.io.IOException: The filename, directory name, or volume label syntax is incorrect
Am I missing anything? Any ideas for next steps?
I tried replacing
fileName="${soapui.logroot}/soapui-errors.log"
with
fileName="${sys:soapui.logroot}/soapui-errors.log"
and it worked for me.
I no longer see the unresolved '${soapui.logroot}' directory being created.
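Note that filePattern references the same property, so it presumably needs the sys: prefix as well. A sketch of the appender with both attributes updated (same names as in the question):

<RollingFile name="ERRORFILE"
             fileName="${sys:soapui.logroot}/soapui-errors.log"
             filePattern="${sys:soapui.logroot}/soapui-errors.log.%i"
             append="true">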

Problems with log4j2

I have two problems with log4j2:
I cannot figure out how to specify the root log level on the command line.
I execute runnable jar with log4j2.xml config file
java -Dlog4j.configurationFile=log4j2.xml -jar my.jar
The log4j2.xml has the root logger level set to INFO, but sometimes I need to specify DEBUG.
-Dlog4j.configurationFile=log4j2.xml does not work on Windows, though it works fine on Mac and Linux. log4j2.xml does exist in the current folder.
I get an error when executing the above command line in Windows PowerShell:
Error: Could not find or load main class .configurationFile=log4j2.xml
I tried -Dlog4j.configurationFile=file://log4j2.xml, -Dlog4j.configurationFile=./log4j2.xml, and -Dlog4j.configurationFile=file://<full_path_to_log4j2.xml>, with the same error.
Define a property named "rootLevel" as
${sys:rootLevel:-INFO}
then on your root logger specify
<Root level="${rootLevel}">
The root level will now default to INFO, and you can override it with -DrootLevel=DEBUG on the command line.
As for the second problem: it's the dot in the property name. PowerShell splits the argument at the dot, so on Windows you need to put the whole -D option in quotes.
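Putting both fixes together, a minimal sketch of a complete log4j2.xml using this pattern (the console appender and layout are placeholders), plus the quoted PowerShell invocation:

<Configuration>
  <Properties>
    <Property name="rootLevel">${sys:rootLevel:-INFO}</Property>
  </Properties>
  <Appenders>
    <Console name="STDOUT">
      <PatternLayout pattern="%d %p %c - %m%n"/>
    </Console>
  </Appenders>
  <Loggers>
    <Root level="${rootLevel}">
      <AppenderRef ref="STDOUT"/>
    </Root>
  </Loggers>
</Configuration>

java "-Dlog4j.configurationFile=log4j2.xml" "-DrootLevel=DEBUG" -jar my.jar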

Solr/Tika dataimport temporary files permission exception

I'm trying to set up data import from files using Apache Tika and Solr. The shared docs folder is on an NFS-mounted share. Unfortunately, I can't perform the data import: one file is processed and then this exception appears:
[http-8080-3] ERROR org.apache.solr.handler.dataimport.DocBuilder - Exception while processing: files document : null:org.apache.solr.handler.dataimport.DataImportHandlerException: Unable to read content Processing Document # 2
at org.apache.solr.handler.dataimport.DataImportHandlerException.wrapAndThrow(DataImportHandlerException.java:71)
....
at java.lang.Thread.run(Thread.java:744)
Caused by: java.io.IOException: Access denied
at java.io.UnixFileSystem.createFileExclusively(Native Method)
at java.io.File.createNewFile(File.java:1006)
at java.io.File.createTempFile(File.java:1989)
at org.apache.tika.io.TemporaryResources.createTemporaryFile(TemporaryResources.java:66)
at org.apache.tika.io.TikaInputStream.getFile(TikaInputStream.java:533)
at org.apache.tika.io.TikaInputStream.getFileChannel(TikaInputStream.java:564)
at org.apache.tika.parser.microsoft.POIFSContainerDetector.getTopLevelNames(POIFSContainerDetector.java:373)
at org.apache.tika.parser.microsoft.POIFSContainerDetector.detect(POIFSContainerDetector.java:165)
at org.apache.tika.detect.CompositeDetector.detect(CompositeDetector.java:61)
at org.apache.tika.parser.AutoDetectParser.parse(AutoDetectParser.java:113)
at org.apache.solr.handler.dataimport.TikaEntityProcessor.nextRow(TikaEntityProcessor.java:140)
... 26 more
So it seems to be a permissions problem while writing temporary files. Unfortunately, I have no idea where exactly Tika tries to write those temporary files, so I can't check the permissions on the NFS share. I checked the permissions on the Tika home folder (core configuration) and on the docs folder and its subfolders - all OK, including the problematic document.
I also tried changing the docs directory in my core config to another one (on the same NFS share) and everything worked. So, do you have any idea how to track down this issue?
[EDIT]
I just noticed that it's not really a permissions problem. Everything works for .docx and .pdf files, but it fails on .doc files. Do you have any ideas?
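One hint from the stack trace: the .doc path goes through POIFSContainerDetector, which spools the stream to a temporary file via File.createTempFile, and with no explicit directory that call writes to the JVM's java.io.tmpdir; .docx and .pdf are apparently detected without spooling, which would explain why only .doc fails. A hedged check, assuming a Tomcat-style startup script (the directory is a placeholder):

# Point the servlet container's JVM at a temp directory the Solr user can write to
JAVA_OPTS="$JAVA_OPTS -Djava.io.tmpdir=/path/to/writable/tmp"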

Error grabbing Grapes ... unresolved dependency ... not found

UPDATE 8/6:
The beefed-up logging has shown me that there is an issue deleting the old jar from the cache, which leads to the fatal "not found" error. There are other threads similar to this, but only where someone is locking the file with their IDE. We are running a single Groovy script from Jenkins, and no one is logged into this box.
We ran Process Explorer right after the failure and there were no locks. Then I logged in with the user that Jenkins uses to run the script, and I got no error deleting the files.
Also, it seems there was a fix in Ivy 2.1 to not fail when the jar cannot be deleted, and I'm on Ivy 2.2 (Groovy 1.8.4). What gives?
Couldn't delete outdated artifact from cache: C:\Users\myUser\.groovy\grapes\com.a.b.c\x-y-z\jars\x-y-z-1.496.jar
then the false(?) error:
Caught: java.lang.ExceptionInInitializerError
java.lang.ExceptionInInitializerError
Caused by: java.lang.RuntimeException: Error grabbing Grapes -- [unresolved dependency: com.a.b.c#x-y-z;1.+: not found]
at smokeTestSuccess.<clinit>(smokeTestSuccess.groovy)
Interestingly enough, this happens every day the first time the script is run after 5am. I guess the cache gets invalidated through some default config at 5am? Is this some kind of clue?
Original post:
I am intermittently getting an error when running a number of different Groovy scripts which all share an identical @Grab declaration (file names changed to protect the innocent). First, the full Grab declaration:
@GrabResolver(name = 'libs.release', root = 'http://myserver:8081/artifactory/libs-release', m2compatible = 'true')
@Grapes([
    @Grab(group = 'com.a.b.c', module = 'x-y-z', version = '1.+', changing = true),
    @Grab('commons-lang:commons-lang:2.3'),
    @Grab('log4j:log4j:1.2.16'),
    @Grab('gpars:gpars:0.12'),
    @Grab('jsr166y:jsr166y:1.7.0'),
    @Grab('org.codehaus.groovy.modules.http-builder:http-builder:0.6'),
    @Grab('org.apache.commons:commons-collections:3.2.1'),
    @Grab('org.apache.httpcomponents:httpclient:4.2.2'),
    @Grab('org.apache.httpcomponents:httpcore:4.2.3'),
    @Grab('org.cyberneko.html:nekohtml:1.9.17'),
    @Grab('xerces:xercesImpl:2.11.0'),
])
@GrabConfig(systemClassLoader = true)
Then the error:
Caught: java.lang.ExceptionInInitializerError
java.lang.ExceptionInInitializerError
Caused by: java.lang.RuntimeException: Error grabbing Grapes -- [unresolved dependency: com.a.b.c#x-y-z;1.+: not found]
Upon doing numerous internet searches, the cause always seems to be very simple, either one of these two basic problems:
1. Repository unreachable
2. Jar file doesn’t exist
However, in the Artifactory logs, I've proven that the file is actually being downloaded:
*Artifactory did accept the request for download:
2014-07-17 07:58:19,938 [ACCEPTED DOWNLOAD] libs-release-local:com/a/b/c/x-y-z/1.477/x-y-z-1.477.jar for anonymous/165.226.40.155.
*Artifactory did deliver the jar:
20140717075820|156|REQUEST|165.226.40.155|non_authenticated_user|GET|/libs-release/com/a/b/c/x-y-z/1.477/x-y-z-1.477.jar|HTTP/1.1|200|1276695
The scripts all work about 100% of the time if they are simply restarted. This all leads me to believe that the issue is the Grab timing out. Theoretically, the second time I run the script the file is in the cache and things happen faster, so it doesn't fail.
For the above real request, I can see about 20 seconds of elapsed time in the http log from request to download.
Questions:
Does my theory seem correct?
Is there a way to increase the amount of time that the script will wait for the @Grab to resolve?
Does putting a try / catch block around the @Grab statements seem like a good idea? Or will that just hide the real problem?
thanks in advance!!!!
I think I finally figured out the answer to my own question.
I believe there is some sort of bug within Groovy 1.8.4 (or Ivy 2.2), especially since this behavior does mirror an ancient documented Ivy bug with this exact error message scheme and behavior.
Upgrading to Groovy 2.3.6 (which includes Ivy 2.3) appears to solve the issue.
I also still have no idea why the jars cannot be deleted; nothing is locking them. I experimented with moving the grape cache to a less secure folder to rule out a permission issue, but this didn't help:
-Dgrape.root=D:\Temp\grapeCache
UPDATE 8/19:
Once we upgraded to Groovy 2.3.6, the error went away, but I then figured out that the jar was no longer being downloaded at all when using the "1.+" resolver. Something in the defaultGrapeConfig.xml was causing an issue. Everything is finally working properly after (in addition to the Groovy upgrade) we overrode defaultGrapeConfig.xml with our own stripped-down file using this command-line JAVA_OPT:
-Dgrape.config=D:\Temp\myGrapeConfig.xml
which had these contents:
<ivysettings>
  <settings defaultResolver="downloadGrapes"/>
  <resolvers>
    <chain name="downloadGrapes">
    </chain>
  </resolvers>
</ivysettings>
ALSO:
For completeness (further steps):
In Jenkins GUI, update the job(s):
a. Update the drop down for each script: Execute Groovy Script > Groovy Version > Groovy-2.3.6
b. Update the JAVA_OPTS for each script (have to click the ‘advanced’ button under the script to see JAVA_OPTS):
-Dgrape.config=D:\Software\SfGrapeConfig.xml
Optional logging switches: -Dgroovy.grape.report.downloads=true -Divy.message.logger.level=4
In the actual Groovy script itself, delete this option within the @GrabResolver annotation: , m2compatible = 'true'
If you get this or a similar error:
"could not find client or server jvm under [Whatever JAVE_HOME is], please check that it is a valid jdk / jre containing the desired type of jvm"
Delete groovy.exe & groovyw.exe from D:\Software\Groovy-2.3.6\bin (if the exe’s do not exist, the Jenkins groovy plugin will use the bat file versions of these, and they handle the 32-bit / 64-bit problem better than the exe’s)

CruiseControl.NET with UCM ClearCase - How to?

I am trying to configure CruiseControl.NET for UCM ClearCase for the first time. Following is the <sourcecontrol> block in the ccnet.config file:
<sourcecontrol type="clearCase">
  <branch>123_India_Release</branch>
  <autoGetSource>true</autoGetSource>
  <viewName>admin_123_CRUISE</viewName>
  <viewPath>$(ViewDirectory)</viewPath>
  <useLabel>false</useLabel>
  <useBaseline>false</useBaseline>
  <executable>cleartool.exe</executable>
</sourcecontrol>
I constantly receive the following error:
ThoughtWorks.CruiseControl.Core.CruiseControlException: Source control
operation failed: cleartool: Error: Not an object in a vob: "PATH TO
THE VIEW"
When I run cleartool from an arbitrary directory with the following parameters:
cleartool.exe lshist -r -nco -branch "123_India_Release" -since
05-Dec-2012.14:38:18 -fmt
I get the same error. But if I change the working directory to $(ViewDirectory) before running cleartool, it runs fine.
How can I make CruiseControl.NET run cleartool.exe from $(ViewDirectory)?
I have already tried adding a <workingDirectory>$(ViewDirectory)</workingDirectory> tag before <executable>cleartool.exe</executable>, but it did not work.
Any help would be appreciated.
EDIT 1:
As a workaround I have done the following:
<exec>
  <executable>cleartool.exe</executable>
  <baseDirectory>d:\Workspace\123_India_Release\VOB</baseDirectory>
  <buildArgs>update -force</buildArgs>
  <buildTimeoutSeconds>6000</buildTimeoutSeconds>
</exec>
I have added this to the tasks tag. I have configured an hourly trigger which does the following:
1) Update snapshot view
2) Build the VS 2010 solutions mentioned in the tasks tag.
The limitations are:
1) The trigger is hourly. I want it to be a commit-based trigger.
2) This is a workaround
EDIT 2:
Further experimentation revealed that ccnet.exe works fine. It does all that is needed. The issue is caused by the ccservice service.
I have stopped ccservice for now and started ccnet.exe. I plan to leave it running.
The view directory isn't enough: you must specify a VOB.
See for instance:
"clearfsimport: Error: Not an object in a vob: "\"." (as an illustratio of that error message)
this thread (or this one): "You have to specify explicitly the VOB(s) to check for modification set"
The path should look like:
<viewPath>Drive:\path\to\view\vobname</viewPath>
If your $(ViewDirectory) already references Drive:\path\to\view, then you could use:
<viewPath>$(ViewDirectory)\vobname</viewPath>
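Applied to the configuration above, that gives something like this sketch ('myVob' is a placeholder for the actual VOB name under the view):

<sourcecontrol type="clearCase">
  <branch>123_India_Release</branch>
  <autoGetSource>true</autoGetSource>
  <viewName>admin_123_CRUISE</viewName>
  <viewPath>$(ViewDirectory)\myVob</viewPath>
  <useLabel>false</useLabel>
  <useBaseline>false</useBaseline>
  <executable>cleartool.exe</executable>
</sourcecontrol>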
