What is a .clientrc file? - node.js

You can find a mention of it here: https://code.visualstudio.com/api/language-extensions/language-server-extension-guide
but without any explanation. I couldn't find information about it by googling.
What is the purpose of a .clientrc file?

As mentioned in the comments, ".clientrc" was just chosen as an example file name.
If you google site:code.visualstudio.com clientrc, that page is the only search result. And googling site:github.com/microsoft/vscode "clientrc", all the results are about example code (and most are about that example code specifically).
The .*rc file name pattern is a common convention for application-specific configuration files. There's a Baeldung article on it:
Names including rc often signify files or directories of files with code. Specifically, this code consists of commands that are meant to run when a program is executed. Indeed, that program can be an application, but it can also be a whole operating system.
Because of this, the original rc affix and extension both meant “run commands”. In particular, a widely accepted source of the term is the Compatible Time-Sharing System (CTSS)

Related

Passing filenames to python.subprocess across different platforms

I have a (cross-platform) script which invokes Excel/LibreOffice to open a file. Under Linux I don't have any problems, but with Windows there are problems when there are spaces in the path of the file. This is a known issue; I've read quite a few posts, but none seems to address the specifics. Here is what I do:
subprocess.run([self.settings.excel, path])
The problem I end up with is the following: Excel complains that the file Your%20Preferred%20Path%20With%20Spaces cannot be found (obviously), although the parameter I give to subprocess isn't URL-encoded like that.
So: who is garbling my string? Is it Excel, subprocess…?
I have found a workaround using os.startfile, but it is not cross-platform. While having to execute different code depending on the underlying platform may work as a patch, caring about such low-level details is hardly Pythonic.
Therefore: what should I do to have cross-platform, standard code solving this issue (and avoiding big libraries like webbrowser if at all possible)? How could I build path so that it is passed to Excel in the expected format?
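One way to keep the platform-specific workaround contained is to hide the dispatch the asker describes in a single helper, roughly along these lines (a minimal sketch; the helper name and the excel_path parameter are illustrative stand-ins for self.settings.excel):
import os
import subprocess
import sys

def open_spreadsheet(excel_path, path):
    # Hypothetical helper: open 'path' in a spreadsheet application.
    if sys.platform == "win32":
        # On Windows, hand the file to its associated application;
        # os.startfile() copes with spaces in the path.
        os.startfile(path)
    else:
        # Elsewhere, pass the path as a separate list element;
        # the list form of subprocess.run() needs no quoting.
        subprocess.run([excel_path, path], check=True)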

How to find which Yocto Project recipe populates a particular file on an image root filesystem

I work with the Yocto Project quite a bit, and a common challenge is determining why (or from what recipe) a file has been included on the rootfs. This is something that can hopefully be derived from the build system's environment, logs and metadata. Ideally, a set of commands would allow linking a file back to its source (i.e. a recipe).
My usual strategy is to search the metadata (e.g. grep -R filename ../layers/*) and to search the internet for said filenames to find clues about possibly responsible recipes. However, this is not always very effective. In many cases, filenames are not explicitly stated within a recipe. Additionally, there are many cases where a filename is provided by multiple recipes, which leads to additional work to find which recipe ultimately supplied it. There are of course many other clues available to find the answer. Regardless, this investigation is often quite laborious, when it seems the build system should have enough information to make resolving the answer simple.
This is the exact use case for the oe-pkgdata-util script and its find-path subcommand. The script is part of openembedded-core.
See this example (executed in an OE build environment, i.e. where bitbake works):
tom@pc:~/oe/build> oe-pkgdata-util find-path /lib/ld-2.24.so
glibc: /lib/ld-2.24.so
You can clearly see that this library belongs to the glibc recipe.
oe-pkgdata-util has more useful subcommands for inspecting packages and recipes; it is worth checking the --help.
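For example, if your version provides the lookup-recipe subcommand, you can also go from a runtime package name back to the recipe that builds it (the package name below assumes Debian-style renaming and is only illustrative):
tom@pc:~/oe/build> oe-pkgdata-util lookup-recipe libc6
glibc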
If you prefer a graphical presentation, the Toaster web UI will also show you this, plus dependency information.
The candidate files deployed by each recipe are placed in that recipe's $WORKDIR/image.
So you can cd to
$ cd ${TMPDIR}/work/${MULTIMACH_TARGET_SYS}
and perform a
$ find . -path '*/image/*/fileYouAreLookingFor'
From the result you should be able to infer the ${PN} of the recipe which deploys that file.
For example:
$ find . -path '*/image/*/mc'
./bash-completion/2.4-r0/image/usr/share/bash-completion/completions/mc
./mc/4.8.18-r0/image/usr/share/mc
./mc/4.8.18-r0/image/usr/bin/mc
./mc/4.8.18-r0/image/usr/libexec/mc
./mc/4.8.18-r0/image/etc/mc

Finding the default project settings file

I'm trying to write a plugin for 3ds max, I went through the entire sdk installation process to the letter as described in the help files.
The problem I'm facing, though, is IntelliSense complaining about an invalid macro definition:
"IntelliSense: command-line error: invalid macro definition:_CRT_SECURE_CPP_OVERLOAD_STANDARD_NAMES_COUNT =1"
I found the definition in project settings -> C/C++ -> preprocessor definitions, inherited from the parent or project defaults.
I tried disabling the inherited definitions and re-entered them, this time without the space between the name and the =, and all works fine, so I'm guessing it's a typo on their part?
Anyway, I want to change the default project (or whatever it is) so I don't have to repeat this every time I start a new project. The project is created with a wizard, which required me to copy over some files before it would appear, after which I had to enter the SDK path.
The files I copied are plain text with some fancy extensions and not much in them, so I'm guessing the defaults are described somewhere in the SDK directory. Does anybody know what kind of file I'm looking for?
EDIT: I found a file called root.vcxproj_template and it has a section for preprocessor definitions but all it contains is
<PreprocessorDefinitions>_USRDLL;%(PreprocessorDefinitions)</PreprocessorDefinitions>
and no mention of the broken one
EDIT 2: In another part of the file there was a path to a property sheet (maxsdk\ProjectSettings\propertySheets\3dsmax.common.tools.settings) which included the faulty definition. I fixed it and there are no more complaints from VS.
_CRT_SECURE_CPP_OVERLOAD_STANDARD_NAMES_COUNT = 1 means that the compiler should replace the old C run-time routines such as sprintf, strcpy and strtok with their secure versions such as sprintf_s, strcpy_s, strtok_s and similar. It goes in a pair with the following definition: _CRT_SECURE_CPP_OVERLOAD_STANDARD_NAMES = 1.
You can find more here (MSDN): https://msdn.microsoft.com/en-us/library/ms175759.aspx. However, I tried to use this without success. It says that you can use this only for statically allocated buffers like char buffer[32], but the compiler was still complaining about the insecure strcpy.
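For illustration, the pair of defines is meant to let calls on buffers whose size the compiler can see resolve to the _s overloads, roughly like this (a hypothetical C++ sketch; normally the defines live in the project's preprocessor definitions rather than in code):
// Both defines must be set before the CRT headers are included.
#define _CRT_SECURE_CPP_OVERLOAD_STANDARD_NAMES 1
#define _CRT_SECURE_CPP_OVERLOAD_STANDARD_NAMES_COUNT 1
#include <string.h>

void copy_name(const char* src)
{
    char buffer[32];     // statically sized, so the size is deducible
    strcpy(buffer, src); // intended to resolve to the strcpy_s template overload
}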

can an RPM spec file "include" other files?

Is there a kind of "include" directive in RPM spec? I couldn't find an answer by googling.
Motivation: I have an RPM spec template which the build process fills with the version, revision and other build-specific data. This is currently done with sed. I think it would be cleaner if the spec could #include a build-specific definitions file generated by the build process, so I don't need to search and replace in the spec.
If there is no include, is there an idiomatic way to do this (quite common, I believe) task?
Sufficiently recent versions of rpmbuild certainly do support %include:
%include common.inc
Unfortunately, they aren't very smart about it -- there is no known set of directories in which they will look for the requested files, for example. But it is there, and variables are expanded, for example:
%include %{_topdir}/Common/common.inc
RPM does not support includes.
I have solved similar problems with either m4 macro processor or by just concatenating parts of spec (when the "include" was at the beginning).
If you only need to pass a few variables at build time, and not include several lines from another file, you can run
rpmbuild --define 'myvar SOMEVALUE' -bb myspec.spec
and you can use %myvar in the spec.
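For instance, a spec that consumes the define might look roughly like this (mypackage and the use of myvar for Version are just illustrative):
Name:    mypackage
Version: %{myvar}
Release: 1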
I faced this same issue recently. I wanted to define multiple sub-packages that were similar, but each varied just slightly (they were language-specific RPMs). I didn't want to repeat the same boiler-plate stuff for each sub-package.
Here's a generic version of what I did:
%define foo_spec() %{expand:%(cat '%{myloc}/main-foo.spec')}
%{foo_spec bar}
%{foo_spec baz}
%{foo_spec qux}
The use of %{expand} ensures that %(cat) is only executed a single time, when the macro is defined. The content of the main-foo.spec file is then expanded three times, and each time %1 in the main-foo.spec file expands to bar, baz and qux, in turn, allowing me to treat it as a template. You could easily extend this to more than one parameter if you have the need (I did not).
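For reference, a stripped-down main-foo.spec along these lines might look something like the following (the contents are hypothetical; the point is only how %1 is used):
%package %1
Summary: %{name} files for %1

%description %1
Language-specific (%1) files split out of %{name}.

%files %1
%{_datadir}/%{name}/%1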
For the underlying issue, there may be two additional solutions that are present in all rpm versions that I am aware of:
Subpackages
Macros and rpmrc files
Subpackages
Another alternative (and perhaps the "RPM way") is to use sub-packages. Maximum RPM also has information and examples of subpackages.
I think the question is trying to structure something like,
two spec files; say rpm_debug.spec and rpm_production.spec
both use %include common.spec
debug and production could also be client and server, etc. For the examples of redefining a variable, each subpackage can have its own list of variables.
Limitations
The main advantage of subpackages is that only one build takes place; this may also be a disadvantage. The debug and production example may highlight this. It can be worked around by using strip to create variants, or by compiling twice with different output (perhaps using VPATH with GNU Make). Having to compile large packages and then only produce simple variations, like with/without developer info (headers, static libraries, etc.), can make you appreciate this approach.
Macros and Rpmrc
Subpackages don't solve the problem of structural defines that you want for an entire rootfs hierarchy, or a larger collection of RPMs. We have rpmbuild --showrc for this. You can have a large number of variables and macros defined by altering the rpmrc and macros files read when you run rpm and rpmbuild. From the man page:
rpmrc Configuration
/usr/lib/rpm/rpmrc
/usr/lib/rpm/redhat/rpmrc
/etc/rpmrc
~/.rpmrc
Macro Configuration
/usr/lib/rpm/macros
/usr/lib/rpm/redhat/macros
/etc/rpm/macros
~/.rpmmacros
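As a small illustration, a per-user ~/.rpmmacros might carry entries such as (values here are only examples):
%_topdir    %(echo $HOME)/rpmbuild
%vendor     Example Corp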
I think these two features can solve all the problems that %include can. However, %include is a familiar concept and was probably added to make rpm more full-featured and developer friendly.
Which version are you talking about? I currently have %include filename.txt in my spec file and it seems to work just like the C #include directive.
> rpmbuild --version
RPM version 4.8.1
You can include the *.inc files from the SOURCES directory (%_sourcedir):
Source1: common.inc
%include %{SOURCE1}
In this way they will go automatically into SRPMS.
I've used scripts (name your favorite) to take a template and create the spec file from that. Also, the %files section can read its list from a file created by another process, e.g. Python's bdist_rpm.

On GNU/Linux systems, where should I load application data from?

In this instance I'm using c with autoconf, but the question applies elsewhere.
I have a glade xml file that is needed at runtime, and I have to tell the application where it is. I'm using autoconf to define a variable in my code that points to the "specified prefix directory"/app-name/glade. But that only begins to work once the application is installed. What if I want to run the program before that point? Is there a standard way to determine what paths should be checked for application data?
Thanks
Thanks for the responses. To clarify, I don't need to know where the app data is installed (e.g. by searching in /usr, /usr/local, etc.); the configure script takes care of that. The problem was more about determining whether the app has been installed yet. I guess I'll just check the install location first, and if it's not there, look in "./src/foo.glade".
I don't think there's any standard way to locate such data.
Personally, I'd keep a list of paths and check whether the file can be found in any of them; the list should contain DATADIR+APPNAME as defined by autoconf, and CURRENTDIRECTORY+POSSIBLE_PREFIX, where the prefix might be some folder from your build root.
But in any case, don't forget to use those defines from autoconf for your data files; they make your software easier to package (e.g. as deb/rpm).
There is no prescription for how this should be done in general, but Debian packagers usually install the application data somewhere in /usr/share, /usr/lib, et cetera. They may also patch the software to make it read from the appropriate locations. You can see the Debian policy for more information.
I can, however, say a few words about how I do it. First, I don't expect to find the file in a single directory; I create a list of directories that I iterate through in my wrapper around fopen(). This is the order in which I believe the file lookup should be done:
current directory (obviously)
~/.program-name
$(datadir)/program-name
$(datadir) is a variable you can use in Makefile.am. Example:
AM_CPPFLAGS = $(ASSERT_FLAGS) $(DEBUG_FLAGS) $(SDLGFX_FLAGS) $(OPENGL_FLAGS) -DDESTDIRS=\"$(prefix):$(datadir)/:$(datadir)/program-name/\"
This of course depends on the output from your configure and what your configure.ac looks like.
So, just make a wrapper that will iterate through the locations and get the data from those dirs -- something like a PATH variable, except you implement the iteration.
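A minimal sketch of such a wrapper in C, assuming DESTDIRS is the colon-separated list passed in through AM_CPPFLAGS above (the function name is just illustrative, and the ~/.program-name step is omitted for brevity):
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Open a data file by trying the current directory first,
 * then each colon-separated directory from DESTDIRS. */
FILE *data_fopen(const char *name, const char *mode)
{
    FILE *fp = fopen(name, mode);   /* 1. current directory */
    if (fp)
        return fp;

    char *dirs = strdup(DESTDIRS);  /* e.g. "/usr/local:/usr/local/share/:..." */
    char path[4096];
    for (char *dir = strtok(dirs, ":"); dir && !fp; dir = strtok(NULL, ":")) {
        snprintf(path, sizeof(path), "%s/%s", dir, name);
        fp = fopen(path, mode);     /* 2. prefix/datadir locations */
    }
    free(dirs);
    return fp;
}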
After writing this post, I noticed I need to clean up our implementation in this project, but it can serve as a nice start. Take a look at our Makefile.am for using $(datadir) and our util.cpp and util.h for a simple wrapper (yatc_fopen()). We also have yatc_find_file() in case some third-party library is doing the fopen()ing, such as SDL_image or libxml2.
If the program is installed globally:
/usr/share/app-name/glade.xml
If you want the program to work without being installed (i.e. just extract a tarball), put it in the program's directory.
I don't think there is a standard way of placing files. I build it into the program, and I don't limit it to one location.
It depends on how much customising of the config file is going to be required.
I start by constructing a list of default directories and work through them until I find an instance of glade.xml and stop looking, or fail to find it and exit with an error. Good candidates for the default list are /etc, /usr/share/app-name, /usr/local/etc.
If the file is designed to be customizable, then before I look through the default directories, I work through a list of user files and paths. If I don't find one of the user versions, I then look in the list of default directories. Good candidates for the user config files are ~/.glade.xml, ~/.app-name/glade.xml or ~/.app-name/.glade.xml.
