Including extra directories with Keter - haskell

I have a Yesod site and have created a handler for handling downloads and enforcing constraints. My Yesod project directory has a subdirectory called downloads, and it contains files I want the user to be able to download if they are logged in. The handler works great in my development and staging boxes, but breaks when I transfer to production. I tracked the problem down to yesod keter not archiving the files when it builds its bundle.
How do I convince keter to include the directory?

All the yesod keter command does is create a .tar.gz compressed archive file with the .keter extension containing the following subdirectories:
config: an exact copy of the identically named directory in your source tree
dist: contains a subdirectory bin containing your app's binary
static: an exact copy of the identically named directory in your source tree
Note that the path to your app's binary is set in config/keter.yml via the exec setting, while the path to your static files is set via the root setting. The exact set of files included by the yesod keter command is specified in the findFiles function, if you want to take a look at the source code.
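For reference, a minimal config/keter.yml built around those two settings might look something like this (the host and paths are hypothetical, and the exact layout depends on your keter version):
# config/keter.yml -- minimal sketch (hypothetical host and paths)
exec: ../dist/bin/myapp
args:
  - production
host: www.example.com
root: ../static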
If you want to customize the contents of your .keter file it is probably most straightforward to write a shell script to create the archive. With this script you can add arbitrary extra directories to the archive.
The bare minimum bash script you'd need to emulate the behaviour of yesod keter is as follows:
#!/bin/bash
tar czvf myapp.keter config/ dist/bin/ static/
You can customize this however you want to produce the correct content. Adding downloads/ to the end of this command line should do the trick, as shown below.
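For the project above, the complete script would be (assuming it is run from the project root, with the z flag because a .keter file is a gzip-compressed tar archive):
#!/bin/bash
# Bundle the three standard directories plus the extra downloads/ directory.
tar czvf myapp.keter config/ dist/bin/ static/ downloads/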

Related

How to set RubyMine's default working directory to the executing .rb program's subdirectory?

In RubyMine, I have a project with many subfolders, each of which contains:
One or more standalone single-file executable Ruby programs (.rb files);
An input text file.
In older versions of RubyMine, when running one of the standalone executable programs (via Cmd+Shift+R on my Mac), the default folder in which RubyMine would look for the input file was the same directory as the .rb file currently being executed, which worked great.
The code used to read the file is something like:
data = File.readlines('input.txt')
However, after recently updating RubyMine to v2022.3.1, the behavior has changed: RubyMine now seems to look for the file in the project's root directory instead of the same subdirectory as the .rb file currently being run. This produces the error:
in `readlines': No such file or directory @ rb_sysopen - input.txt (Errno::ENOENT)
To correct this, I've been going into Run (menu) > Edit Configurations, and in the Edit Configurations dialog, in the configuration that RubyMine auto-created for the current executable file, changing the Working directory value from the default (the project's root directory) to the subfolder of the current .rb file.
However, this workaround is annoying, since I need to repeat it for every one of the many .rb executable files in my project.
My question: How can I configure my project and/or RubyMine itself to go back to the older behavior of defaulting a given .rb file to use its own directory as the default Working Directory, instead of the project's root directory?
(This question and/or its solution might also apply to other JetBrains IDEs such as IntelliJ, since they all seem to work similarly.)
The previous behaviour was changed with https://youtrack.jetbrains.com/issue/RUBY-29236, so the logic is now the following:
if there is no Ruby module, the project's root is used
for Rails, its home folder
otherwise, the module's root
There is no option to change this in RubyMine itself, but you can edit the run configuration template and use a variable there as the Working directory.
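If your RubyMine version supports path macros in run configuration templates (an assumption worth verifying under Run > Edit Configurations > Edit configuration templates), setting the template's Working directory to $FileDir$ should make every new configuration default to the directory of the file being run. Alternatively, you can make the scripts themselves insensitive to the working directory; a minimal sketch in plain Ruby:
# Resolve input.txt relative to this script's own directory
# rather than the process working directory.
data = File.readlines(File.expand_path('input.txt', __dir__))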

How Does a .feature File get found by Cucumber When Cucumber is Run?

In Cucumber, after a new project directory is made and a .feature file is created and saved in an editor, how does the file get found by Cucumber when Cucumber is run? Is the file imported into Cucumber manually, or does the tool scan the whole system automatically and map itself to the file?
By default Cucumber will load all files in the 'features' folder in the root directory (recursively).
If you want to use a different location you can run Cucumber with the command 'cucumber myfolder' which will look for features in a folder called myfolder in the project root.
It gets a bit more complicated when using subdirectories for features. From this site (copied here for the record): http://makandracards.com/makandra/4971-how-to-organize-and-execute-cucumber-features-e-g-in-subdirectories
By default, cucumber loads all *.rb files it can find (recursively) within the directory you passed as argument to cucumber.
$ cucumber # defaults to directory "features"
$ cucumber features
$ cucumber my/custom/features/dir
So, if you would like to organize features in subdirectories, you won't have any problems when running the whole test-suite. Cucumber will automatically load and run features in subdirectories, too.
However, running a single feature in a subdirectory does not work out of the box. The reason for this is that cucumber will look for your step definitions and support files within the subdirectory.
What you can do now is either provide all the needed support files and step definitions within the subdirectories as well (not practical), OR use the -r command line argument when running the subdirectory features:
cucumber -r features features/some/subdirectory
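If you run subdirectory features often, you can bake the -r flag into a profile in cucumber.yml and invoke it with cucumber -p billing (the profile name and subdirectory here are hypothetical):
# cucumber.yml -- a profile bundling the -r flag with a subdirectory
billing: -r features features/billing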
In your test runner class you can specify the path where your feature files are located.
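For example, on Cucumber-JVM a minimal JUnit 4 runner might look like this (the package and paths are hypothetical):
import org.junit.runner.RunWith;
import io.cucumber.junit.Cucumber;
import io.cucumber.junit.CucumberOptions;

// Points Cucumber at the feature directory and the step definition package.
@RunWith(Cucumber.class)
@CucumberOptions(features = "src/test/resources/features", glue = "com.example.steps")
public class RunCucumberTest {
}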

git - "ignore" or avoid versioning subdirectories

I want to have all my configuration files versioned using Git in a remote repository at Github. I'm using Debian 7 testing, and all my configuration files are under the /home/user_name/ directory.
I created the usual .gitignore with all the files that I want to ignore and the files and directories that I want to version. My problem begins when I go to Documents, for example, and see in zsh that the folder is under the same versioning as the home directory.
I understand that Git works that way, but I need to know if it's possible to avoid that.
One classic way to version configuration files is to create a subdirectory like ~/etc/ and let your ~/.something files be symbolic links to ~/etc/something. Then, you can version ~/etc/ normally.
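For example, for a single file (paths hypothetical):
mkdir -p ~/etc
mv ~/.zshrc ~/etc/zshrc      # move the real file into the versioned directory
ln -s ~/etc/zshrc ~/.zshrc   # leave a symlink at the original location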
You can manage to ignore everything but your configuration files, but you'll always have little glitches like: the day you run git clean -fdx in the wrong place, you delete all your data.
Write */ in your .gitignore to ignore directories. Make exceptions with !foodir. Consider prefixing with slashes (see documentation for details).
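A sketch of such a .gitignore, using the hypothetical foodir from above:
# Ignore every directory...
*/
# ...except this one. The directory itself must be un-ignored before
# anything inside it can be tracked; add !foodir/** as well if it
# contains nested subdirectories.
!foodir/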

steps needed to create binary package for distribution in linux

I am a little confused about how to create a complete binary package using rpmbuild from a project I just created (an already compiled binary).
My current project has a similar format to this user's (Packaging proprietary software for Linux), where I have:
foo (binary)
data
libs
foo.sh
libs will contain all the shared libraries the project requires, and foo.sh is a script that sets LD_LIBRARY_PATH to include libs. Therefore, the user will execute foo.sh and the program should start.
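A sketch of such a launcher (paths hypothetical):
#!/bin/sh
# foo.sh -- find the install directory, point the dynamic loader at the
# bundled libs/, then hand control to the real binary.
DIR=$(dirname "$(readlink -f "$0")")
LD_LIBRARY_PATH="$DIR/libs${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
export LD_LIBRARY_PATH
exec "$DIR/foo" "$@"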
I am looking at the tutorial from this site (rpm tutorial)
I understand that to create an RPM I create a build area using rpmdev-setuptree.
I can create a spec file with cd ~/rpmbuild/SPECS; rpmdev-newspec foo, and if I have a good SOURCES folder I can build it with rpmbuild -ba foo.spec.
But I have no idea how to set up the SOURCES directory. The tutorial stated (here) that I should create a tarball, place all my source files in it, and put it in the SOURCES directory. What would be the source files in my case?
You are trying to create an RPM from binary files you already have? In that case, you can leave the build step out of the spec file entirely. You still need the SOURCES directory to hold the bundles you've got; the %prep step described below will take them from there.
In a binary package I built a while back from zip files, I did:
Heading, with name, version, description written by me/cribbed from the originals
Sources: The original places to download the Linux packages, official documentation, ...
%prep: Just unpack the different pieces, delete some redundant files, ...
%build: Nothing to do
%install: Create the relevant directories under $RPM_BUILD_ROOT by hand, copy files there by install, copy/create configuration files, ...
%clean: Blow away $RPM_BUILD_ROOT
%files: An exhaustive list of all files installed.
This required a few iterations to get right. Afterwards I followed the upstream package by rebuilding my RPM (conveniently I had everything packaged up in an SRPM, where the Source part was kind of a misnomer...).
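A skeleton spec along those lines, with every name and path hypothetical:
Name:           foo
Version:        1.0
Release:        1%{?dist}
Summary:        Prebuilt foo application
License:        Proprietary
Source0:        foo-1.0.tar.gz

%description
Repackaged prebuilt binary of foo.

%prep
# Just unpack the tarball from SOURCES.
%setup -q

%build
# Nothing to do -- the binary is already compiled.

%install
mkdir -p $RPM_BUILD_ROOT/opt/foo
cp -r foo foo.sh data libs $RPM_BUILD_ROOT/opt/foo/

%clean
rm -rf $RPM_BUILD_ROOT

%files
/opt/foo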

Multiple locations within a folder hierarchy to run SCons from

So far, I've only seen examples of running SCons in the same folder where the single SConstruct file resides. Let's say my project structure is like:
src/*.(cpp|h)
tools/mytool/*.(cpp|h)
What I'd like is to be able to run 'scons' at the root and also inside tools/mytool. The latter compiles only mytool. Is this possible with SCons?
I assume it involves creating another SConstruct file. I've made another one: tools/mytool/SConstruct
I made it contain only:
SConscript('../../SConstruct')
and I was thinking of doing Import('env mytoolTarget') and calling Default(mytoolTarget), but running it with just the above runs in the current directory instead of from the root, so the include paths are broken.
What's the correct way to do this?
You can use the -u option to do this. From any subdirectory, scons -u will search upwards in the directory tree for an SConstruct file.
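For example:
cd tools/mytool
scons -u
Only targets at or below the current directory are built, so the include paths are resolved from the root as usual; the related -U and -D options select slightly different target sets (see the SCons man page).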
