I have a C# source file. Is there any way to put something like #!/usr/bin/env mono at the top, so it will be compiled and then run as an executable?
For Python, for example, I'd do this:
#!/usr/bin/env python
In fact, what I want is to run the script without calling "mono the.exe" after compiling. I want something like "./the.exe".
EDIT: I just noticed you want to do this for a single source file. This is almost supported by the csharp REPL that ships with Mono. However, the REPL spits out a syntax error because it doesn't understand the shebang line and sees it as a preprocessor definition. If I misunderstood and you were talking about a compiled assembly, the below text still applies. /EDIT
You can't use shebangs, because .exe files produced by Mono are PE executables, just like on Windows. They contain CIL, not a script.
What you can do though is produce a small shell script that runs mono your.exe and use that, or you can use the Linux kernel's binfmts support, as outlined here.
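For the wrapper route, here is a minimal sketch (assuming the compiled assembly is called the.exe and sits next to the wrapper; both names are just examples):
#!/bin/sh
# run the assembly that lives in the same directory as this wrapper
exec mono "$(dirname "$0")/the.exe" "$@"
Save that as, say, the, make it executable with chmod +x the, and ./the behaves like the ./the.exe you were after.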
I think you may be able to use update-binfmts to register mono as the interpreter for (compiled) mono programs.
Try update-binfmts --display and see if the output includes something like:
cli (enabled):
package = mono-common
type = magic
offset = 0
magic = MZ
mask =
interpreter = /usr/bin/cli
detector = /usr/lib/cli/binfmt-detector-cli
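If no such cli entry exists (the one above is registered by Mono's own packaging, per package = mono-common), you may be able to add one yourself. A minimal sketch, assuming a Debian-style system where update-binfmts comes from the binfmt-support package and mono lives in /usr/bin:
# register mono as the interpreter for PE ("MZ") binaries; run as root
update-binfmts --install cli /usr/bin/mono --magic MZ
# then the assembly only needs the execute bit
chmod +x the.exe
./the.exe
Note that matching on the plain MZ magic hands every Windows-style executable to mono, which is presumably why the packaged entry also uses the binfmt-detector-cli helper shown above.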
Related
I would set PROTON_FORCE_LARGE_ADDRESS_AWARE=1 to activate LARGE_ADDRESS_AWARE for all executables in the Wine bottle if I were using Steam's Proton. What is the equivalent for standalone Wine or Lutris?
A quick look at the source code reveals these lines:
self.check_environment("PROTON_FORCE_LARGE_ADDRESS_AWARE", "forcelgadd")
if "forcelgadd" in self.compat_config:
    self.env["WINE_LARGE_ADDRESS_AWARE"] = "1"
As such, it looks like the equivalent is WINE_LARGE_ADDRESS_AWARE.
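So with standalone Wine you set that variable in the environment of the wine process itself. A sketch, assuming your Wine build honours the variable (Proton does; a plain upstream Wine may not) and with a made-up path to the program:
# enable large-address-aware behaviour for this one invocation
WINE_LARGE_ADDRESS_AWARE=1 wine 'C:\Games\Example\game.exe'
In Lutris you can add WINE_LARGE_ADDRESS_AWARE=1 to the game's environment variables in its system options instead of exporting it on the command line.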
Well, the idea goes as follows:
I have a bash script for Linux; there I obviously run it with ./my_run.
The problem is that I'm on Windows, so I downloaded and installed Cygwin.
I added Cygwin's bin directory to the environment variables (PATH) and checked that at least "ls" works, so I assumed I had set it up correctly.
When I try to run it from cmd it displays:
'.' is not recognized as an internal or external command,
operable program or batch file.
It's as if the Cygwin variables were not set correctly (though, as I said, I tried ls and it works).
Then I tried it directly in Cygwin, and running ./my_run there worked right away.
So how is it that I can use some commands like ls, but ./ doesn't work in cmd? How can I fix this?
Well, Cygwin is essentially a shared library plus a lot of programs that use it (read the Cygwin documentation). cygwin1.dll internally translates paths and characters: it lets you say ./my_script and converts that to .\my_script before making the actual Windows call, and it also tries the proper extensions so it can execute Windows binaries. This magic persists only as long as you go through the library. cmd.exe is a Microsoft Windows command shell that is completely unaware of Cygwin's shared library; for that reason it never calls it for path translation, even if you fill the environment with zettabytes of stuff. When you work in a Cygwin terminal you are running bash, which is a Cygwin executable linked against cygwin1.dll. Bash uses the Cygwin library for all its Unix system-call emulation, so when something ends up in exec("./my_script", ...);, the library internally tries ./my_script, then .\my_script, then ./my_script.exe, and so on for the .com and .bat extensions.
This often leads people to say that Cygwin is not a good, efficient environment. But the purpose was never efficiency (and it does its best, caching entries to stay reasonably fast); the purpose was compatibility.
In your example, ls is a Cygwin executable that mimics /bin/ls from Unix systems. It uses the Cygwin library, so all path resolution is done properly (well, under some constraints, as you'll see after some testing) and everything works fine. But you cannot expect all your Windows applications to suddenly transform themselves and behave as if they were in a different environment. That takes some trial and error on your part. And read the Cygwin documentation; it is very good and covers everything I've said here.
If you open up Cygwin and run the command there you should be fine.
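If you must start it from cmd or a .bat file anyway, call Cygwin's bash explicitly and let it do the path handling. A sketch, assuming the default install location C:\cygwin64 and a script sitting in C:\projects (both paths are just examples):
C:\cygwin64\bin\bash.exe -l -c "cd /cygdrive/c/projects && ./my_run"
The -l makes bash a login shell, so the usual Cygwin PATH and environment are set up before your script runs.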
I have a script file that I was given to run in Windows using Cygwin. When I try to use this file I get the following error:
-bash: /sigdet/filename: cannot execute binary file: Exec format error.
sigdet is the folder within the Cygwin directory where I keep the script; rawdata is the directory with the raw data files the script is supposed to analyze.
To try to solve this, I have changed the file permissions, and I have checked that my machine is 64-bit and that the file appears to have been compiled for a 64-bit machine. After these steps, I don't know what else the problem could be. Here are the commands I've entered:
I first changed the directory like so:
$ cd /sigdet/
Then I ran the script that is supposed to work:
$ /sigdet/filename -i rawdata
Does the script file need to have an extension in Windows? I've tried changing it to a .sh extension with no luck. I'm told it just works on other Windows machines as it is.
Thanks to anyone that can help with this.
Your file is not an executable your system can run. It most probably contains an ELF executable, which is designed for the Linux operating system, or it is corrupt.
If your file were a shell script, or in fact anything containing plain text, you'd get different errors (such as "expected command name" or "unknown command: XYZ", etc.).
Scripts are not supposed to have file extensions, just like any other executables. On the other hand, they should have shebangs: a small piece of text in the first line that tells the system the path to the interpreter. For example, an executable Python script might be named whatever and have #!/usr/bin/python3 or similar in its first line. When you run it as ./whatever in the shell, the system looks for python3 in /usr/bin and runs your file like this: /usr/bin/python3 ./whatever. (In fact, thanks to this you can also specify additional parameters that get passed to the interpreter.)
There is also a chance that your script is valid but its shebang points to a broken interpreter. If that is the case, the path itself is most likely correct; otherwise you'd get an error like /whatever/interpreter: bad interpreter: No such file or directory. But then all the points above apply to the interpreter, which is just another executable...
If the script and/or interpreter was meant to be executed on Windows, or at least on Cygwin, it should either contain the aforementioned shebang (#!/path in the first line) or be a Windows executable (in which case the file data should begin with the letters MZ; you can inspect it in Notepad). If it doesn't, the files you were given simply can't run under Cygwin.
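A quick way to see which case applies is the file utility (in Cygwin it comes from the file package; the path is the one from the question):
# report what kind of file this really is
file /sigdet/filename
# typical outputs:
#   "ELF 64-bit LSB executable, x86-64 ..."  -> a Linux binary, won't run under Cygwin
#   "PE32+ executable ..."                   -> a Windows/Cygwin binary
#   "... script text executable"             -> a script; check its shebang line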
Had this same problem. I added the following at the top of the makefile:
export ARCH = CYGNUS
What happened during the make process is that both Linux and Windows versions of the executables were created. You just have to use the .exe versions (run them with ./).
In my case, I got the error when I used the wrong command to compile my C program. When I used the right command:
gcc myprog.c -o myprog.exe
the error was resolved.
We've been using protocol buffers, and are generating the C++ and Python files with protoc and the C# files with protobuf-csharp-port. At the moment these are done separately: the C++ and Python from Linux and the C# from Windows. We'd like to have one script generate all of these, running on Linux.
To do this I'm trying to run ProtoGen.exe with mono, but it's not producing any output. The following command runs, but produces no .cs files and no errors.
mono ../cs/Packages/Google.ProtocolBuffers/tools/ProtoGen.exe --protoc_dir=/usr/local/bin/ ./subdir/simple_types.proto
I've got a feeling that I'm missing something simple.
I don't think I've tried running protoc from ProtoGen.exe on Linux. I'm surprised that it doesn't have any errors, but we can definitely look into that. (If you fancy raising an issue, that would be really helpful - or I'll do it when I get the chance.)
For the moment, I suggest that you run protoc first, using --descriptor_set_out to produce a binary (protobuf) version of the .proto file. That's what ProtoGen.exe is trying to do first, and failing by the sounds of it.
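For example, with the paths from your question (adjust to taste), that first step might look like:
# write a binary descriptor set next to the .proto file
protoc --proto_path=./subdir --descriptor_set_out=./subdir/simple_types.pb ./subdir/simple_types.proto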
Once you've got the binary version of your message descriptor (I'd call it simple_types.pb), you can run ProtoGen.exe on that. It's been a while since I've done this, but I believe you should be able to just run
mono ../cs/Packages/Google.ProtocolBuffers/tools/ProtoGen.exe ./subdir/simple_types.pb
... and it should magically work.
As a horrible alternative, you could try symlinking protoc.exe to protoc in your binary directory. Fundamentally I suspect that's what's going wrong :)
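If you want to try that, it is a one-liner; this assumes protoc really is in /usr/local/bin, as your --protoc_dir suggests:
# give ProtoGen.exe a protoc.exe name to find
ln -s /usr/local/bin/protoc /usr/local/bin/protoc.exe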
My script:
protoc "--proto_path=$SRC_DIR" "--descriptor_set_out=x.protobin" --include_imports $SRC_DIR/x.proto
mono $PRJ_HOME/Google.ProtocolBuffers.2.4.1.521/tools/ProtoGen.exe -line_break=Unix x.protobin
protoc and mono were installed via the distribution's package manager:
# archlinux
pacman -S protobuf mono
I have a Haskell script that runs via a shebang line using the runhaskell utility, e.g.:
#! /usr/bin/env runhaskell
module Main where
main = do { ... }
Now, I'd like to be able to determine the directory in which that script resides from within the script, itself. So, if the script lives in /home/me/my-haskell-app/script.hs, I should be able to run it from anywhere, using a relative or absolute path, and it should know it's located in the /home/me/my-haskell-app/ directory.
I thought the functionality available in the System.Environment module might help, but it fell a little short. getProgName did not seem to provide useful file-path information. I found that the environment variable _ (that's an underscore) would sometimes contain the path to the script as it was invoked; however, as soon as the script is invoked by some other program or a parent script, that environment variable seems to lose its value (and I need to invoke my Haskell script from another, parent application).
It would also be useful to know whether I can determine the directory in which a pre-compiled Haskell executable lives, using the same technique or otherwise.
As I understand it, this is historically tricky in *nix. There are libraries for some languages to provide this behavior, including FindBin for Haskell:
http://hackage.haskell.org/package/FindBin
I'm not sure what this will report with a script though. Probably the location of the binary that runhaskell compiled just prior to executing it.
Also, for compiled Haskell projects, the Cabal build system provides data-dir and data-files and the corresponding generated Paths_<yourproject>.hs for locating installed files for your project at runtime.
http://www.haskell.org/cabal/release/cabal-latest/doc/users-guide/authors.html#paths-module
There is a FindBin package which seems to suit your needs and it also works for compiled programs.
For compiled executables, in GHC 7.6 or later you can use System.Environment.getExecutablePath.
getExecutablePath :: IO FilePath
Returns the absolute pathname of the current executable.
Note that for scripts and interactive sessions, this is the path to the
interpreter (e.g. ghci.)
There is executable-path which worked with my runghc script. FindBin didn't work for me as it returned my current directory instead of the script dir.
I could not find a way to determine the script path from Haskell (which is a real pity, IMHO). However, as a workaround, you can wrap your Haskell script inside a shell script:
#!/bin/sh
SCRIPT_DIR=`dirname $0`
runhaskell <<EOF
main = putStrLn "My script is in \"$SCRIPT_DIR\""
EOF