I am using the command: dotnet "myfile.dll"
Initially it was giving me this error: The user's home directory could not be determined. Set the 'DOTNET_CLI_HOME' environment variable to specify the directory to use.
Now after messing around with it, I have moved my files to c:/mydir, and it is giving this error: Failed to initialize CoreCLR, HRESULT: 0x80070057. I found this, but isn't c:/mydir a drive root?
A couple of things I noted:
I am able to run the .dll fine in a different directory.
Both directories contain the same files.
The reason I want to run it in c:/mydir is because I am using AWS CodeDeploy, and that is where it copies the files (as far as I know; and the other directories are just the old versions where the files get copied from).
These issues are not linked (the first one I get from running an automated shell script after installation, and the second from manually trying to launch the .dll).
If someone could help me solve either one of these issues it would be greatly appreciated.
Try adding Environment=DOTNET_CLI_HOME=/temp to your Service declaration in your .service file. Example syntax:
[Service]
...
Environment=VARNAME=VARCONTENTS
So the actual line would look like this:
Environment=DOTNET_CLI_HOME=/temp
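For context, a rough sketch of a complete unit file (the service name, paths and values below are placeholders, not taken from your setup):

[Unit]
Description=My .NET app deployed by CodeDeploy

[Service]
WorkingDirectory=/var/www/myapp
ExecStart=/usr/bin/dotnet /var/www/myapp/myfile.dll
Environment=DOTNET_CLI_HOME=/temp
Restart=always

[Install]
WantedBy=multi-user.target

After editing the unit file, reload systemd (systemctl daemon-reload) and restart the service so the new environment variable takes effect.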
I am trying to deploy a Python Lambda package with the watson_developer_cloud SDK. Cryptography is one of the many dependencies this package has. I have built this package on a Linux machine. My package includes the hidden .libffi-d78936b1.so.6.0.4 file too. But it is still not accessible to my Lambda function. I am still getting the 'libffi-d78936b1.so.6.0.4: cannot open shared object file' error.
I have built my packages on a Vagrant server, using the instructions from here: https://docs.aws.amazon.com/lambda/latest/dg/with-s3-example-deployment-pkg.html#with-s3-example-deployment-pkg-python
Exact error:
Unable to import module 'test_translation': libffi-d78936b1.so.6.0.4: cannot open shared object file: No such file or directory
On a side note, as explained in this solution, I have already created my package using zip -r9 $DIR/lambda_function.zip . instead of *, but it is still not working for me.
Any direction is highly appreciated.
The libffi-d78936b1.so.6.0.4 file is in a hidden folder named .libs_cffi_backend.
So to add this hidden folder in your lambda zip, you should do something like:
zip -r ../lambda_function.zip * .[^.]*
That will create a zip file named lambda_function.zip in the parent directory, containing all the files in the current directory (the first *) and everything starting with . but not .. (the [^.] part).
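If you want to confirm the hidden folder actually made it into the archive, you can list its contents as a quick sanity check:

unzip -l ../lambda_function.zip | grep libffi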
In a situation like this, I would invest some time setting up a local SAM environment so you can:
1 - Debug your Lambda
2 - Check what is being packaged and the file hierarchy
https://docs.aws.amazon.com/lambda/latest/dg/test-sam-cli.html
Alternatively you can remove this import and instrument your lambda function to print some of the files and directories it "sees".
I strongly recommend giving SAM a try, though, since it will make not only this debugging much easier, but also any further testing you need to do down the road. Lambdas are tricky to debug.
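As a rough sketch of what that looks like (the handler name, runtime and CodeUri below are assumptions, not taken from your project), a minimal template.yaml for SAM would be something like:

# template.yaml - minimal sketch; adjust Handler/Runtime/CodeUri to match your project
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Resources:
  TestTranslation:
    Type: AWS::Serverless::Function
    Properties:
      Handler: test_translation.lambda_handler
      Runtime: python3.6
      CodeUri: ./package

You can then run the function locally with sam local invoke TestTranslation -e event.json and see exactly which files the runtime can and cannot find.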
A little late, and I would have commented on Frank's answer, but I don't have enough reputation.
I was including the hidden directory .libs_cffi_backend in my deployment package, but for some reason Lambda could not find the libffi-d78936b1.so.6.0.4 file located within it.
After copying this file into the same 'root' level directory as my Lambda handler, it was able to load the dependency and execute.
Also, make sure all the files in the deployment package are readable: chmod -R 644 .
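Put together, the workaround looks roughly like this (a sketch, assuming you run it from the root of the unpacked deployment package; the exact file name may differ on your build):

cp .libs_cffi_backend/libffi-d78936b1.so.6.0.4 .   # put the shared object next to the handler
chmod 644 libffi-d78936b1.so.6.0.4                 # make sure it is readable
zip -r9 ../lambda_function.zip * .[^.]*            # re-zip, including hidden files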
I managed to install STS 3.8.2 on Ubuntu 16.04 - with a lot of hacking experiments. I have it working, but I am not happy with my solution.
Here is what I had to do:
Extracted the tar file into /opt/sts-bundle.
If you put it anywhere else, like /opt/sts, the TC server fails to start from STS.
With the files in /opt/sts-bundle, TC server still fails to start from STS - permission errors. To get it to work you need to futz around with the permissions of the pivotal-tc-server subdirectories; essentially you need to open them up to your group (the same one running STS) (security hole?).
A local install in your own ~/sts-bundle fails with "files not found" while attempting to back up all the conf files. It still looks in /opt/sts-bundle for all these config files (just to copy them to /backup). You can change the top directory of the server in the STS server properties, but it still looks in /opt/sts-bundle. It seems hard-coded - I don't know where. So you have to create all the config files in the conf directory in the tree rooted at /opt/sts-bundle ("touch" works - creating empty files). TC Server still fails to start, with a "failed to clean" error - with no clue from the detailed message about which files are being "cleaned".
I tried creating a non-privileged user "tcserver", per the suggestion in the Pivotal TC Server docs. I installed to /opt/sts-bundle while logged in as tcserver (with sudo privileges). That fails when I am using STS as a regular developer who is not "tcserver". I could not figure out how to tell TC server to run under a different user than the one that started STS.
The solution I have working, and am not happy with, starts by extracting the tar.gz file into /opt/sts-bundle, as it wants, and then changing the owner and group of sts-bundle to my id and my group (the same ones used in the STS UI). I am not happy with that. It seems wrong to put things in /opt that are owned by a single developer.
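Concretely, that step is just something like this (the user and group names are placeholders for your own):

sudo chown -R mydev:mydev /opt/sts-bundle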
I am new to Linux, and I still have some Windows habits that need to be unlearned.
The question is: how do I get the clean solution (installing using a "tcserver" user in the global /opt directory) to work for developers who are not "tcserver"? How should the tcserver user be related to the developers (same group?).
Am I making this problem harder than it should be? What am I missing?
I'm not sure this is what you want, but I don't install the STS bundles in some kind of shared directory as a special user at all. I just install it in my user.home dir, as myself, and launch it from there.
It is very unsophisticated. I just download the tar.gz file, unpack it in my home dir and then launch it from a trivial bash script which looks something like this:
#!/bin/bash
/home/kdvolder/Applications/sts-bundle/sts-*/STS
That script is on my PATH. So I can just type 'STS' in a terminal and STS will start.
I don't have to do anything else and it works.
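In case it helps, this is roughly how that is wired up (the locations are just an example, not a requirement):

mkdir -p ~/bin
cat > ~/bin/STS <<'EOF'
#!/bin/bash
/home/kdvolder/Applications/sts-bundle/sts-*/STS
EOF
chmod +x ~/bin/STS
export PATH="$PATH:$HOME/bin"   # add this line to ~/.bashrc to make it permanent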
If you are trying to somehow install this so that several different users can run a shared installation, then this isn't a good setup. But I think for your own personal laptop or desktop, which only you are using, this simple setup is perfectly fine.
For a shared-user environment, unfortunately, I don't know how to help you. It could be complicated to sort out all the permissions issues etc. because Eclipse is a complicated beast w.r.t. installation of plugins etc.
I am trying to publish a project to Windows Azure but get an error in the generated Microsoft.WindowsAzure.targets file related to the length of paths and file names. How can I determine which is the problematic path or filename? The error relates to the "" tag in the generated file.
Thanks
Martin
Already an older post - did you check this post?
Path too long error when building a windows azure service
This might resolve your issue.
Since this was still a problem for me years later and the above doesn't apply to Azure SDK v2.8, I was able to solve it by creating a symbolic link to my projects folder. Open up the command prompt as an administrator and run this:
mklink /D C:\Dev C:\Users\danzo\Source\Workspaces
Obviously you can change "C:\Dev" to whatever you want it to be, and you'll need to change the longer path above to the root directory of your solutions/projects folder.
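If you need to remove the link later, rmdir on the link deletes only the link itself, not the target folder:

rmdir C:\Dev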
Should it generally be possible to run a program from the source directory (src) after having invoked ./configure and make (but not make install)? I'm trying to fix a bug in an application and it seems unnecessary to run make install after each code change. Unfortunately I can't run the application in the source directory since it tries to access files in the lib installation directory (which do not exist before make install). Is the application wrongly configured or do I have to reinstall it after each change to the source code?
It all depends on the application and what components or files it expects to be visible and where. But assuming no required configuration or dependencies, then yes, you can run the program in-place.
To add a directory to your lib search path, add to the environment variable LD_LIBRARY_PATH. Like so:
LD_LIBRARY_PATH="$LD_LIBRARY_PATH:/home/user/myproject/lib" ./someprogram
Note that specifying a variable assignment on the command line in front of the program you run sets that variable for that run only. (Note, no semicolon - this is a single command.) If you want to set the variable for the entire session, use
export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:/home/user/myproject/lib"
I'd recommend against this, though. It can lead to problems and confusion.
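Either way, ldd is a quick way to check which shared objects the binary actually resolves and from where (the path below is just an example):

LD_LIBRARY_PATH="$PWD/lib" ldd ./someprogram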
InnoSetup appears to be corrupting my executable when compiling the setup project.
Executing the source file works fine, but executing the file after installation produces Win32 error 1006 "The volume for a file has been externally altered".
I've tried disabling compression and setting various flags, to no avail.
Has anyone experienced this?
UPDATE
Okay there's been some twists to the situation:
At the moment, I can even manually copy a working file to the location it is installed to and still get "The volume for a file...". To be clear: I uninstall the application, create the same folder, paste the files there, and run.
UPDATE 2
Some more details for those that want it:
The InnoSetup script is compiled by FinalBuilder using output from MSBuild (also executed by FinalBuilder), running on my machine with XP SP3. The executable is a C# .NET assembly compiled in the Release|AnyCPU configuration. The file works when executed in the folder the install script takes it from. It produces the same behaviour on an XP virtual machine. The MD5 hashes of the source file and the installed file are the same.
Ok, I just received this same error. I have a config file which my executable uses. I looked in my folder a million times, but finally noticed the config file was zero length. I corrected the config and the error stopped occurring.
Check the simplest things first... good luck!
ERROR_FILE_INVALID
1006 (0x3EE): The volume for a file has been externally altered so that the opened file is no longer valid.
I suspect you're having this issue after moving the files to a network share. It seems to me that what's happening is you have an open file handle - possibly to a temporary file you are creating - and then some other process (perhaps running on a different host) is coming along and renaming or deleting that file or its parent directory tree.
So my advice is:
Try installing to a local directory
Run after an anti-virus scan, in safe-mode, or on a different machine, to see if there isn't some background nasty changing volume/directory properties while your program is running.
Make sure the program itself isn't doing anything weird with the volume or directory tree you're working with.
Never seen that before. I've got a few questions and suggestions:
- Are you signing the EXE during the compile of the setup? If so, try leaving that part out.
- What OS are you installing on, or does it happen on all machines you've tried?
- Run the install with the /LOG="c:\install.log" option and post the log. It might show something happening during install.
- Run a byte compare or MD5 check on the source EXE and the installed EXE. Are they the same? Do they have the same version resource?
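For example, from a command prompt (the paths and installer name are placeholders; certutil -hashfile needs a reasonably recent Windows - on XP you'd use a separate tool such as Microsoft's fciv):

setup.exe /LOG="c:\install.log"
certutil -hashfile "C:\build\output\MyApp.exe" MD5
certutil -hashfile "C:\Program Files\MyApp\MyApp.exe" MD5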