I'm on NixOS 20.09 and am trying to build godot-3.2.3-stable with C# support.
I'm following the instructions in the Godot docs, which state that one first has to enable the mono module and then generate the mono glue. I tried to adapt the default.nix from the NixOS/nixpkgs repository to my needs, but failed.
What I tried is to first generate the glue as described in the Godot docs and then compile with C# support:
with import <nixpkgs> {};
let
  options = {
    pulseaudio = true;
  };
in
stdenv.mkDerivation rec {
  name = "tools_";
  version = "3.2.3";

  src = ./godot-3.2.3-stable.tar.gz;

  nativeBuildInputs = with pkgs; [ pkg-config ];
  buildInputs = with pkgs; [
    scons mono6 msbuild dotnetPackages.Nuget xorg.libX11 xorg.libXcursor
    xorg.libXinerama xorg.libXrandr xorg.libXrender xorg.libXi xorg.libXext
    xorg.libXfixes freetype openssl alsaLib libpulseaudio libGLU zlib yasm
  ];

  patches = [
    ./pkg_config_additions.patch
    ./dont_clobber_environment.patch
  ];

  enableParallelBuilding = true;

  sconsFlags = "target=release_debug platform=x11 tools=yes module_mono_enabled=yes mono_glue=no";
  preConfigure = ''
    sconsFlags+=" ${lib.concatStringsSep " " (lib.mapAttrsToList (k: v: "${k}=${builtins.toJSON v}") options)}"
  '';

  outputs = [ "out" ];

  installPhase = ''
    mkdir -p "$out/bin"
    cp bin/godot.* $out/bin/godot
  '';
}
But if I run this with nix-build, I get the following error:
Checking for `thread_local` support... supported
Mono root directory not found. Using pkg-config instead
/nix/store/vnyfysaya7sblgdyvqjkrjbrb0cy11jf-bash-4.4-p23/bin/sh: pkg-config: command not found
OSError: 'pkg-config monosgen-2 --libs-only-L' exited 127:
File "/build/godot-3.2.3-stable/SConstruct", line 617:
SConscript("modules/SCsub")
File "/nix/store/ypb1lsl610mw3s76p232hjmnhwakjb06-scons-4.0.1/lib/python3.8/site-packages/SCons/Script/SConscript.py", line 661:
return method(*args, **kw)
File "/nix/store/ypb1lsl610mw3s76p232hjmnhwakjb06-scons-4.0.1/lib/python3.8/site-packages/SCons/Script/SConscript.py", line 598:
return _SConscript(self.fs, *files, **subst_kw)
File "/nix/store/ypb1lsl610mw3s76p232hjmnhwakjb06-scons-4.0.1/lib/python3.8/site-packages/SCons/Script/SConscript.py", line 287:
exec(compile(scriptdata, scriptname, 'exec'), call_stack[-1].globals)
File "/build/godot-3.2.3-stable/modules/SCsub", line 21:
SConscript(name + "/SCsub") # Built-in.
File "/nix/store/ypb1lsl610mw3s76p232hjmnhwakjb06-scons-4.0.1/lib/python3.8/site-packages/SCons/Script/SConscript.py", line 661:
return method(*args, **kw)
File "/nix/store/ypb1lsl610mw3s76p232hjmnhwakjb06-scons-4.0.1/lib/python3.8/site-packages/SCons/Script/SConscript.py", line 598:
return _SConscript(self.fs, *files, **subst_kw)
File "/nix/store/ypb1lsl610mw3s76p232hjmnhwakjb06-scons-4.0.1/lib/python3.8/site-packages/SCons/Script/SConscript.py", line 287:
exec(compile(scriptdata, scriptname, 'exec'), call_stack[-1].globals)
File "/build/godot-3.2.3-stable/modules/mono/SCsub", line 34:
mono_configure.configure(env, env_mono)
File "/build/godot-3.2.3-stable/modules/mono/build_scripts/mono_configure.py", line 351:
tmpenv.ParseConfig("pkg-config monosgen-2 --libs-only-L")
File "/nix/store/ypb1lsl610mw3s76p232hjmnhwakjb06-scons-4.0.1/lib/python3.8/site-packages/SCons/Environment.py", line 1612:
return function(self, self.backtick(command))
File "/nix/store/ypb1lsl610mw3s76p232hjmnhwakjb06-scons-4.0.1/lib/python3.8/site-packages/SCons/Environment.py", line 599:
raise OSError("'%s' exited %d" % (command, status))
builder for '/nix/store/acpp99hr7q92a7j1bk2nga00n4v8za7z-tools_.drv' failed with exit code 2
error: build of '/nix/store/acpp99hr7q92a7j1bk2nga00n4v8za7z-tools_.drv' failed
I'm sorry, but I'm totally lost here, since this is the first derivation I'm trying to build. I have already searched the nixpkgs manual but couldn't solve my problem, probably because I'm not exactly sure what I'm looking for. I was also asking myself whether this is going to work anyway, since the Godot docs state that after generating the glue one has to:
### Build binaries normally
# Editor
scons p=x11 target=release_debug tools=yes module_mono_enabled=yes
and I don't know if I could then simply rerun the derivation above with the appropriate sconsFlags, or if (and how) I could write a single file that does the whole job.
If someone could help me and point me in the right direction, so that I at least know what exactly to search for, I would very much appreciate it. Thanks in advance.
I am running Ansible on an AWS EC2 Linux machine, which connects to another AWS EC2 Windows machine to copy a file to an S3 bucket.
My tasks/main.yml file looks like this:
---
# tasks file for postgres
- name: Simple PUT operation
  amazon.aws.aws_s3:
    bucket: codepipeline-artefact-12344555-abc
    object: /test.txt
    src: "C:\\teststore-selenium\\test.txt"
    mode: put
My ansible.cfg file looks like this:
[defaults]
log_path = /var/log/ansible.log
ansible_python_interpreter = /usr/bin/python
and the hosts file looks like this:
[win]
10.x.x.5:5986
[win:vars]
ansible_user=Administrator
ansible_port=5986
ansible_password=****PasswordRemoved****
ansible_connection=winrm
ansible_winrm_server_cert_validation=ignore
ansible_winrm_transport=basic
ansible_python_interpreter="c:\users\administrator\appdata\local\programs\python\python310"
#ansible_winrm_read_timeout_sec=60
#ansible_winrm_operation_timeout_sec=60
ansible_shell_type=powershell
ansible_shell_executable=None
I am getting an error when running the playbook:
[WARNING]: log file at /var/log/ansible.log is not writeable and we cannot create it, aborting
[DEPRECATION WARNING]: Ansible will require Python 3.8 or newer on the controller starting with Ansible 2.12. Current version: 3.6.8 (default, Aug
12 2021, 07:06:15) [GCC 8.4.1 20200928 (Red Hat 8.4.1-1)]. This feature will be removed from ansible-core in version 2.12. Deprecation warnings
can be disabled by setting deprecation_warnings=False in ansible.cfg.
ansible-playbook [core 2.11.7]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/ec2-user/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.6/site-packages/ansible
ansible collection location = /home/ec2-user/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/local/bin/ansible-playbook
python version = 3.6.8 (default, Aug 12 2021, 07:06:15) [GCC 8.4.1 20200928 (Red Hat 8.4.1-1)]
jinja version = 2.10.1
libyaml = True
Using /etc/ansible/ansible.cfg as config file
host_list declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
auto declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
Parsed /etc/ansible/hosts inventory source with ini plugin
redirecting (type: action) ansible.builtin.win_copy to ansible.windows.win_copy
Skipping callback 'default', as we already have a stdout callback.
Skipping callback 'minimal', as we already have a stdout callback.
Skipping callback 'oneline', as we already have a stdout callback.
PLAYBOOK: role_postgres.yml ***********************************************************************************************************************
1 plays in role_postgres.yml
PLAY [all] ****************************************************************************************************************************************
TASK [postgres : Simple PUT operation] ************************************************************************************************************
task path: /etc/ansible/roles/postgres/tasks/main.yml:10
<10.x.x.5> ESTABLISH WINRM CONNECTION FOR USER: Administrator on PORT 5986 TO 10.x.x.5
EXEC (via pipeline wrapper)
EXEC (via pipeline wrapper)
Using module file /usr/local/lib/python3.6/site-packages/ansible_collections/amazon/aws/plugins/modules/aws_s3.py
Pipelining is enabled.
EXEC (via pipeline wrapper)
EXEC (via pipeline wrapper)
The full traceback is:
Traceback (most recent call last):
File "/usr/local/lib/python3.6/site-packages/ansible/plugins/action/__init__.py", line 1176, in _parse_returned_data
filtered_output, warnings = _filter_non_json_lines(res.get('stdout', u''), objects_only=True)
File "/usr/local/lib/python3.6/site-packages/ansible/module_utils/json_utils.py", line 57, in _filter_non_json_lines
raise ValueError('No start of json char found')
ValueError: No start of json char found
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/lib/python3.6/site-packages/ansible/executor/task_executor.py", line 158, in run
res = self._execute()
File "/usr/local/lib/python3.6/site-packages/ansible/executor/task_executor.py", line 582, in _execute
result = self._handler.run(task_vars=variables)
File "/usr/local/lib/python3.6/site-packages/ansible_collections/amazon/aws/plugins/action/aws_s3.py", line 63, in run
result = merge_hash(result, self._execute_module(module_args=new_module_args, task_vars=task_vars, wrap_async=wrap_async))
File "/usr/local/lib/python3.6/site-packages/ansible/plugins/action/__init__.py", line 1116, in _execute_module
data = self._parse_returned_data(res)
File "/usr/local/lib/python3.6/site-packages/ansible/plugins/action/__init__.py", line 1200, in _parse_returned_data
match = re.compile('%s: (?:No such file or directory|not found)' % self._used_interpreter.lstrip('!#'))
File "/usr/lib64/python3.6/re.py", line 233, in compile
return _compile(pattern, flags)
File "/usr/lib64/python3.6/re.py", line 301, in _compile
p = sre_compile.compile(pattern, flags)
File "/usr/lib64/python3.6/sre_compile.py", line 562, in compile
p = sre_parse.parse(p, flags)
File "/usr/lib64/python3.6/sre_parse.py", line 855, in parse
p = _parse_sub(source, pattern, flags & SRE_FLAG_VERBOSE, 0)
File "/usr/lib64/python3.6/sre_parse.py", line 416, in _parse_sub
not nested and not items))
File "/usr/lib64/python3.6/sre_parse.py", line 502, in _parse
code = _escape(source, this, state)
File "/usr/lib64/python3.6/sre_parse.py", line 362, in _escape
raise source.error("incomplete escape %s" % escape, len(escape))
sre_constants.error: incomplete escape \u at position 2
fatal: [10.209.31.5]: FAILED! => {
"msg": "Unexpected failure during module execution.",
"stdout": ""
}
I'm not sure what's wrong; if you could assist, that would be great. Thanks in advance.
You're getting complaints about the JSON formatting of a return value. That usually happens to me when the request is malformed. I see you have test.txt in both the src and object fields of tasks/main.yml; check whether that is redundant.
Next, I would do a direct query against AWS for that object with these credentials and see what the response is.
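For that direct check, a minimal boto3 sketch along these lines could help (run it wherever the file and your AWS credentials are available; the bucket, key, and path below are just the values from the task above) to see whether the PUT itself succeeds outside of Ansible:

import boto3

# Values copied from the task above; adjust as needed.
bucket = "codepipeline-artefact-12344555-abc"
key = "test.txt"
src = r"C:\teststore-selenium\test.txt"

s3 = boto3.client("s3")           # credentials from env vars or instance profile
s3.upload_file(src, bucket, key)  # roughly what mode: put does
print(s3.head_object(Bucket=bucket, Key=key)["ContentLength"])  # confirm it arrived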
When attempting to install the Octave io package on Windows 7 I receive the following error:
>> pkg install io-2.4.12.tar.gz
0 [main] us 0 init_cheap: VirtualAlloc pointer is null, Win32 error 487
AllocationBase 0x0, BaseAddress 0x60EA0000, RegionSize 0x190000, State 0x10000
C:\Octave\Octave-4.2.0\bin\perl.exe: *** Couldn't reserve space for cygwin's heap, Win32 error 0
warning: doc_cache_create: unusable help text found in file 'getxmlattv'
For information about changes from previous versions of the io package, run 'news io'.
>>
I have read a few other similar questions but have been unable to find an answer, and in all honesty I do not understand the error message. I understand Cygwin is a Unix-like interface for Windows, but that is about it. I have tried running as administrator, deleting all temp files, restarting, etc., but cannot get the package to install successfully. Any ideas?
The referenced file 'getxmlattv' is just a function, not a text file as described, and cannot be run as a standalone file:
function [retval] = getxmlattv (xmlnode, att)
  retval = '';
  ## Get end of first tag
  iend = index (xmlnode, ">");
  ## Get start of value string. Concat '="' to ensure minimal ambiguity
  vals = index (xmlnode, [att '="']);
  if (vals == 0)
    ## Attribute not in current tag
    return
  elseif (vals)
    vals = vals + length (att) + 2;
    vale = regexp (xmlnode(vals:end), '"[ >/]');
    if (! isempty (vale))
      retval = xmlnode(vals:vals+vale-2);
    endif
  endif
endfunction
I don't see how this is of any use.
For the Windows version of Octave the packages are already included in the installer:
https://wiki.octave.org/Octave_for_Microsoft_Windows
so you don't need to install io from the source package; just re-run the installer if you missed it.
Please note that the error message is misleading: you are not using the Cygwin version of Octave but the MSYS/MinGW one, which is based on a modified version of cygwin1.dll where they forgot to update the messages:
$ strings msys-1.0.dll | grep cygwin
...
%P: *** Couldn't reserve space for cygwin's heap (%p <%p>) in child, %E
...
I'm trying to get AutoConfigBuilder working, but I'm having no luck. I'm using SCons v2.3.4.
My SConstruct:
env = Environment(tools = ['default','UnTar','AutoConfig'])
configured = env.AutoConfig('libpcap-1.6.2')
Output:
scons: Reading SConscript files ...
TypeError: Tried to lookup Dir 'libpcap-1.6.2' as a File.:
File "/home/jreinhart/git_repos/scons-test/autoconf_builder/SConstruct", line 4:
configured = env.AutoConfig('libpcap-1.6.2')
File "/usr/lib/scons/SCons/Environment.py", line 260:
return MethodWrapper.__call__(self, target, source, *args, **kw)
File "/usr/lib/scons/SCons/Environment.py", line 224:
return self.method(*nargs, **kwargs)
File "/usr/lib/scons/SCons/Builder.py", line 633:
return self._execute(env, target, source, OverrideWarner(kw), ekw)
File "/usr/lib/scons/SCons/Builder.py", line 554:
tlist, slist = self._create_nodes(env, target, source)
File "/usr/lib/scons/SCons/Builder.py", line 484:
slist = env.arg2nodes(source, source_factory)
File "/usr/lib/scons/SCons/Environment.py", line 486:
v = node_factory(self.subst(v, **kw))
File "/usr/lib/scons/SCons/Node/FS.py", line 1340:
return self._lookup(name, directory, File, create)
File "/usr/lib/scons/SCons/Node/FS.py", line 1319:
return root._lookup_abs(p, fsclass, create)
File "/usr/lib/scons/SCons/Node/FS.py", line 2224:
result.must_be_same(klass)
File "/usr/lib/scons/SCons/Node/FS.py", line 627:
(self.__class__.__name__, self.path, klass.__name__))
Any advice on how to get started with this recipe? I'm not even sure what exactly is failing here. I'm assuming that, for some reason, SCons thinks that when I call the builder as env.AutoConfig('libpcap-1.6.2'), I'm referring to a file, not a directory.
I've posted this to the scons mailing list, but I'd imagine Stack Overflow gets more traffic.
From what I could find this is a bug in SCons itself. I'm not sure if/when it'll be fixed.
This is my workaround: force the parameter to be a SCons Dir object because SCons isn't getting it right.
Try this:
env.AutoConfig(env.Dir('libpcap-1.6.2'))
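In context, a minimal SConstruct using this workaround might look like the sketch below (untested; it assumes the UnTar/AutoConfig tools from your original setup are available and that libpcap-1.6.2 is already an unpacked directory next to the SConstruct):

# Sketch of the workaround in a full SConstruct; tool names mirror the
# original question, and 'libpcap-1.6.2' is assumed to be an unpacked
# source directory next to this file.
env = Environment(tools = ['default', 'UnTar', 'AutoConfig'])

# Pass an explicit Dir node so SCons does not try to look the source up as a File.
pcap_dir = env.Dir('libpcap-1.6.2')
configured = env.AutoConfig(pcap_dir)

# Anything that needs the configured tree can depend on it explicitly, e.g.:
# env.Depends(my_lib, configured)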
I am trying to make the YCM plugin for Vim work with CUDA source files.
Since CUDA is basically C++ syntax with some extensions, I thought that editing the standard '.ycm_extra_conf.py' file would be sufficient. I changed the line
SOURCE_EXTENSIONS = [ '.cpp', '.cxx', '.cc', '.c', '.m', '.mm']
to
SOURCE_EXTENSIONS = [ '.cpp', '.cxx', '.cc', '.c', '.m', '.mm', '.cu' ]
and the line
return extension in [ '.h', '.hxx', '.hpp', '.hh']
to
return extension in [ '.h', '.hxx', '.hpp', '.hh', '.cuh' ]
But YCM does not work; it does not even ask me whether to use the config file, as it should at startup. For normal C/C++ source files YCM works correctly.
Any ideas what is missing?
I got this working with the following steps:
First, map .cu files to cpp in your .vimrc:
" Map cuda files to c++ so that Ycm can parse
autocmd BufNewFile,BufRead *.cu set filetype=cpp
Next, update .ycm_extra_conf.py with flags for Clang CUDA support:
import os
import ycm_core

includes = ['-I/opt/cudatoolkit/6.5/include', '-I/your/includes/here']

common = ['-std=c++11',
          '-DUSE_CLANG_COMPLETER',
          '-I/usr/local/include',
          '-I/usr/include/clang/3.5/include',
          '-I/usr/include/x86_64-linux-gnu',
          '-I/usr/bin/../lib/gcc/x86_64-linux-gnu/4.9/include',
          '-I/usr/include',
          '-I/usr/include/c++/4.9']

cpp_flags = ['-x', 'c++']

# http://llvm.org/docs/CompileCudaWithLLVM.html
cuda_flags = ['-x', 'cuda', '--cuda-gpu-arch=sm_35']

def FlagsForFile( filename ):
    # Copy the base list so repeated completions don't keep extending the
    # module-level lists.
    compile_flags = list(cpp_flags)
    if filename.endswith('.cu'):
        compile_flags = list(cuda_flags)
    compile_flags.extend(common)
    compile_flags.extend(includes)
    return {
        'flags': compile_flags,
        'do_cache': True
    }
Finally, you need to add a header include to your .cu file so YCM can parse the CUDA builtins. This file, cuda_builtin_vars.h, was in my local Clang build.
#ifdef __clang__
#include <cuda_builtin_vars.h>
#endif
Even with all this, the Clang parser still doesn't seem to accept that my __global__ functions are actually __global__ (even though it can handle the kernel call syntax without any problems), so I usually wrap them with #ifndef __clang__.
Sources:
https://github.com/Valloric/YouCompleteMe/issues/1766
http://llvm.org/docs/CompileCudaWithLLVM.html
https://github.com/Microsoft/clang-1/blob/master/test/SemaCUDA/kernel-call.cu
At the moment I'm using some magic to get the current git revision into my SCons builds: I just grab the version and stick it into CPPDEFINES.
It works quite nicely ... until the version changes and SCons wants to rebuild everything, rather than just the files that have changed, because the define that all files use has changed.
Ideally I'd use a custom builder to generate a file called git_version.cpp that just contains a function returning the right tag. That way only that one file would be rebuilt.
Now I'm sure I've seen a tutorial showing exactly how to do this, but I can't seem to track it down. And I find the custom builder stuff in SCons a little odd...
So any pointers would be appreciated...
Anyway, just for reference, this is what I'm currently doing:
# Let's get the version from git
# first get the base version
git_sha = subprocess.Popen(["git", "rev-parse", "--short=10", "HEAD"], stdout=subprocess.PIPE).communicate()[0].strip()

p1 = subprocess.Popen(["git", "status"], stdout=subprocess.PIPE)
p2 = subprocess.Popen(["grep", "Changed but not updated\\|Changes to be committed"], stdin=p1.stdout, stdout=subprocess.PIPE)
result = p2.communicate()[0].strip()
if result != "":
    git_sha += "[MOD]"

print "Building version %s" % git_sha

env = Environment()
env.Append(CPPDEFINES={'GITSHAMOD': '"\\"%s\\""' % git_sha})
You don't need a custom Builder since this is just one file. You can use a function (attached to the target version file as an Action) to generate your version file. In the example code below, I've already computed the version and put it into an environment variable. You could do the same, or you could put your code that makes git calls in the version_action function.
version_build_template = """/*
 * This file is automatically generated by the build process
 * DO NOT EDIT!
 */
const char VERSION_STRING[] = "%s";
const char* getVersionString() { return VERSION_STRING; }
"""

def version_action(target, source, env):
    """
    Generate the version file with the current version in it
    """
    # env['VERSION'] is assumed to be an object providing toString()
    contents = version_build_template % (env['VERSION'].toString())
    fd = open(target[0].path, 'w')
    fd.write(contents)
    fd.close()
    return 0
build_version = env.Command('version.build.cpp', [], Action(version_action))
env.AlwaysBuild(build_version)
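As a follow-up usage sketch (my_app and the extra source names below are placeholders), the generated file can simply be listed among the program's sources; since SCons signs files by content, only version.build.cpp is recompiled when the revision actually changes:

# 'my_app' and the other sources are placeholders; env['VERSION'] needs to be
# set (to whatever object version_action expects) before the Command above runs.
env.Program('my_app', ['main.cpp', 'widgets.cpp', build_version])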