A phony target for sending a file via e-mail?

I would like to execute a command that takes a specific file in the project (building it as necessary) and sends it somewhere externally. For example, it may be a command that uploads a web page, or sends an e-mail. It may even write some additional files, such as a log, but that is not the point of calling it.
In other words, this action is invoked by naming its source rather than its target, either because there is no tangible target, or because the target is insignificant and the action is wanted primarily for its side effect.
As I see it, an extra command-line argument will have to be provided, like this:
% BuildSystem send pizza.box
The command above should be equivalent to the following:
% BuildSystem pizza.box
% send pizza.box
Can (and "should") this be performed with the shake build system?
P.S. As Daniel suggests in an answer, I can extend shake's argument parser. But I am not sure this is the best practice for a case like this. It seems a little at odds with the way shake behaves, treating every single command line argument as a self-contained goal. It also means a whole new piece of logic for the operator to design, which is a lot of overhead for such a menial task.
It might be more intuitive to request a receipt file for each box that is sent. For instance:
% BuildSystem pizza.receipt
— would then be equivalent to:
% BuildSystem pizza.box
% send pizza.box >pizza.receipt
On the downside, as I understand from an official answer to a question nearby, we cannot have a pseudo-target like pizza.send that does not actually result in a file pizza.send being created. So I am not sure, again, if this is the right way.
P.S. 2 It would be even better if we could replace the default "file exists" success check with custom code. For instance, instead of verifying that a file pizza.receipt (which we otherwise have no need for) was indeed created, we might telephone the customer and ask whether they enjoyed the lunch. If we could arrange that, we could then invoke the corresponding rule with a "pseudo-file" target pizza.send. In general, build artifacts need not reside on the local file system at all, as long as code to verify and retrieve them is provided.

The answer depends on whether you want to send the email once, or repeatedly, even if pizza.box doesn't change. In both cases let's imagine you want to write BuildSystem send.pizza.box to cause the email to be sent.
Send every time
If you want to send a copy of pizza.box every time you can use a phony rule:
phony "send.pizza.box" $ do
need ["pizza.box"]
cmd_ "send-email" "pizza.box"
If you want that to work for all files, you can generalise to:
phonys $ \s -> case stripPrefix "send." s of
    Nothing -> Nothing
    Just file -> Just $ do
        need [file]
        cmd_ "send-email" [file]
Send once
If you only want one copy of each changed pizza.box to be sent then you need to record evidence of that locally, to stop sending successive copies. The easiest way to do that is to actually create a file send.pizza.box:
"send.*" %> \out -> do
let src = drop 5 out
need [src]
cmd_ "send-email" [src]
writeFile' out ""
If you strongly want to avoid writing the send.pizza.box file you can use the phony technique from above combined with addOracle (but it's unlikely to be worth the additional hassle).

Yes, shake supports this via phony.

Related

How to print function call timeline?

I would like to print the location and function name of each function called during a run.
It would help during debugging to identify which function is called when multiple functions have the same name in different places.
It is possible, but time-consuming, to add a message by hand to each function, such as:
println!("(function_name) file = {}, line = {}", file!(), line!());
Do you know such a solution? Have you suggestions to identify easily which function is called and who calls it?
The easiest option would probably be to use something like dtrace or ebpf: hook onto the "function entry" probe (not sure what it's called in linux / ebpf land but I'd think it exists) and just print the relevant information. You may want to add stack-based indentation though, and of course because of compiler optimisations you might have functions going missing. And you might get the mangled names which is not great, but de-mangling is a thing.
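For example, with bpftrace (assuming it is installed and the binary keeps its symbols; myprog is a placeholder), something along these lines attaches to every function in the binary and prints the probe name, which includes the function, on each entry:
bpftrace -e 'uprobe:./myprog:* { printf("%s\n", probe); }'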
You might be able to do something similar by running your program in gdb and creating some sort of programmatic breakpoints which print and immediately continue?
Alternatively, a module-level attribute procedural macro could work: if you get a token stream for the entire module, it might be possible to automatically inject logging data into every function header.
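A lighter-weight variant of the macro idea is a declarative macro pasted at the top of each function of interest: it recovers the enclosing function's path via std::any::type_name, so the message no longer needs the function name written by hand. A sketch (trace_fn is an invented name):
macro_rules! trace_fn {
    () => {{
        // The type name of a local fn item ends in "::f";
        // trimming that suffix yields the enclosing function's path.
        fn f() {}
        fn type_name_of<T>(_: T) -> &'static str {
            std::any::type_name::<T>()
        }
        let name = type_name_of(f);
        println!("file = {}, line = {}, fn = {}", file!(), line!(), &name[..name.len() - 3]);
    }};
}

fn checkout() {
    trace_fn!(); // prints e.g. "file = src/main.rs, line = 17, fn = myprog::checkout"
}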

PTC Integrity batch update member revision

Is there a way to update the member revision of a big list of files via command line?
I can't use :working or :head but have to specify a different revision for each file.
As far as I know --selectionFile only takes paths as input, but not the revision numbers.
edit: I wanted to set the member revision for a very big list of files, and I wanted to avoid writing the command si updaterevision ... for every single file, as that takes ages to complete for so many files. Instead I wanted to know whether there is a more advanced method to specify a list of files together with their revisions, so that updaterevision only has to be run once for the whole list (as is possible with :working).
But as it is said in the comment, there is no such possibility.
edit2: I have used MKS for a couple of years now, and as I now know, there is no way (at least up to MKS 11.6) to update many files to different revisions with one single command-line call. But using one call per member, as was proposed, made the whole operation take up to several hours, as I had many thousands of members in the sandbox and MKS needs some time to complete each si command.
Some time has passed since you asked this question; here is my answer in case it is still useful to you in the future.
First, it is not completely clear what you want to achieve. Please be more descriptive and, if possible, provide an example.
What I understand so far is that you need to set the member revision for a bunch of files through the command line. This is fairly simple; the most complicated part is actually assembling the list of files to be updated and the revision you want to set as member for each.
I recommend creating a batch file with one command per file. You can use a regex to build it quickly and without much trouble.
Here is an example that updates the member revision of one file:
si updaterevision --hostname=servername --port=portnumber --user=username --changepackageid=5873763:2 --revision=:working myfile_a1.c
where
servername = the name of the server where your sandbox is located
portnumber = the port that provides access to the server for your sandbox
username = your login user id
changepackageid = here you change the number to use your defined TASK:ChangePackage for these changes
revision = if you have a working revision that you now want to become member, just use :working as the revision; otherwise you can give a specific revision number, e.g. --revision=1.2
At the end you define the name of the file you want to update.
Go to your sandbox root folder, open a CMD window, and run the batch file. It will execute each line, applying your changes.
If you have a list of files with the revision you want as member, you can use REGEX to convert it into a batch file.
Example list of files in text file:
file1.c 1.10
file3.c 1.19
sec_file1.c 1.1.2.1
support.h 1.7
Use Notepad++ or another text editor with regex support to do a search and replace over that list:
Search = ([\w.]+)\s+([\d.]+).*
Replace = si updaterevision --hostname=servername --port=portnum --user=userid --changepackageid=6123933:4 --revision=\2 \1
\1 => FileName
\2 => File revision
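Applied to the example list above, the replace produces one command per file, for instance:
si updaterevision --hostname=servername --port=portnum --user=userid --changepackageid=6123933:4 --revision=1.10 file1.c
si updaterevision --hostname=servername --port=portnum --user=userid --changepackageid=6123933:4 --revision=1.19 file3.c
si updaterevision --hostname=servername --port=portnum --user=userid --changepackageid=6123933:4 --revision=1.1.2.1 sec_file1.c
si updaterevision --hostname=servername --port=portnum --user=userid --changepackageid=6123933:4 --revision=1.7 support.h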
Finally, just save the document as a batch file and run it.
Just speculating: if you have a large list of members along with the member revisions you want to update to, then you presumably also have a sandbox that served to generate this list.
If so, my approach would be:
c:\MySandbox> si updaterevision --recurse --revision=:working
If your member/revision list comes from a development path, you could first have a sandbox targeting that devpath, resync, (close the sandbox if it is opened in the GUI), retarget the sandbox to the destination devpath (or mainline) you want, and then issue the command above.
For a single-member approach I would use 'si rlog' to generate a list of si commands directly:
si rlog -R --noheaderformat --notrailerformat --revision=:working --format="si updaterevision {membername} --revision={revision}\r\n" > updaterevs.bat.txt
Review updaterevs.bat.txt, rename it to updaterevs.bat, and execute it.
(Be careful if using it on other sandboxes)
Other interesting reading here might be the "snapshot sandbox" feature, checkpointing in general, and variants resp. devpaths.
Using only these features might be politically more correct within the philosophy of Integrity.

How to define a Shake rule that depends on some modification of an input

Using Shake, I want to define a rule that depends on a bunch of executables (e.g. exe, dll, etc.). However, before using them, they need to be digitally signed. Since "signing" doesn't actually produce a new file, how can I ensure that my rule depends only on files after they have been signed?
Edit: Disambiguation is in order. My rules generate some of these files, but not all of them. Some are third-party and are part of our repository. So somehow I need to depend on those static files after they have been signed.
One simple approach is to make the signing step do a copy first, so:
"signed//*" *> \out -> do
copyFile ("unsigned" </> dropDirectory1 out) out
cmd "signer" out
Another is to generate a file when signing that serves as a reminder that the file has been signed:
"programs//*.key" *> \out -> do
writeFile' out ""
cmd "signer" $ dropExtension out
The second formulation is discouraged, since rules generally shouldn't modify values produced by other rules - it's easy to get excessive rebuilds (and I think you probably would here).
As for the extra files: for exes you generate, just generate and sign them in one step - that's a fairly common pattern if you are generating a file and then editing its properties (editbin on Windows). For files you don't generate, you could sign them before you check them in, but modifying source files in the repo (which is what the 3rd-party stuff sounds like) is probably a bad idea anyway, so a copy is probably better.
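For the generate-and-sign-in-one-step pattern, such a rule could look like the following sketch (the output path and the ghc command line are invented, and "signer" again stands for whatever signing tool is in use):
"_build/myapp.exe" %> \out -> do
    srcs <- getDirectoryFiles "" ["src//*.hs"]
    need srcs
    cmd_ "ghc" ["-o", out] srcs
    cmd_ "signer" [out]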

Is it possible to break out of a restricted (custom) shell?

Not sure if this is the right place to ask.
Say I write a shell that takes stdin input, filters this input so let's say only certain commands like
ls (list contents of binary directory and subdirectory)
update (git clone)
build (go build)
test (go test)
start (systemctl start this.service only)
stop (systemctl stop this.service only)
running (is the binary being executed and with how many GOMAXPROCS?)
usage (memory, cpu usage)
gensvc (generate .service file)
exit (leave shell/logout)
work. You guessed it: I'm trying to give a user only very limited maintenance access over ssh.
Say I'm careful with \0 (I'd write it in Go anyway using bufio.Scanner)
Is there any way to stop the running shell and execute /bin/sh or similar or any way to get around this shell?
The idea is that a user should push their stuff via git to a bare repo, this repo is cloned to a certain directory on the filesystem, then go build is called and the binary is run with a systemd .service file that was generated previously.
Thinking logically, if the user is only able to write certain strings that are accepted, then no, there is no way. But maybe you know of one, some ctrl+z witchcraft ;) or whatever.
The only attack surface is the input string, or rather bytes. Of course the user could git push a program that builds its own shell or runs certain commands, but that's out of scope (I would remove capabilities with systemd, restrict device access, forbid anything but the connection to the database server, private tmp and all, namespace and sub-namespace it, TODO).
The only problem I see is git pushing, but I'm sure I could work around that with a git-only mode, argv, and an entry in ~/.ssh/authorized_keys: something like lish gitmode that executes stdin commands only if they start with git or the like.
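The usual mechanism for that is an SSH forced command in ~/.ssh/authorized_keys: sshd then runs the given program no matter what the client asked for, and the original request is available to it in SSH_ORIGINAL_COMMAND. A sketch of such a line, with lish standing for the hypothetical shell above:
command="/usr/local/bin/lish",no-port-forwarding,no-X11-forwarding,no-agent-forwarding ssh-ed25519 AAAA... user@example.com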
Example:
https://gist.github.com/dalu/ce2ef43a2ef5c390a819
If only certain commands are allowed, and your "shell" reads the command, parses it, and then executes it itself, then you should be fine, unless I misunderstood it.
Go "memory" can't be executed, not without you doing some nasty hacks with assembly anyway, so you don't have to worry about shell injection.
Something along these lines should be safe:
package main

import "os/exec"

func getAction() (name string, args []string) {
    // read stdin to get the command of the user
    return
}

func doAction() {
    for {
        action, args := getAction()
        switch action {
        case "update": // let's assume the full command is: update https://repo/path.git
            if len(args) != 1 {
                // error: wrong number of arguments
                continue
            }
            out, err := exec.Command("/usr/bin/git", "clone", "--recursive", args[0]).CombinedOutput()
            // do stuff with out and err
            _, _ = out, err
        }
    }
}

func main() { doAction() }
If you are implementing the shell yourself and directly executing the commands via exec() or implementing them internally, then it is certainly possible to produce a secure restricted shell. If you are just superficially checking a command line before passing it on to a real shell then there will probably be edge cases you might not expect.
With that said, I'd be a bit concerned about the test command you've listed. Is it intended to run the test suite of a Go package the user uploads? If so, I wouldn't even try to exploit the restricted shell if I was an attacker: I'd simply upload a package with tests that perform the actions I want. The same could be said for build/start.
Have it reviewed by a pentesting team.
People can be very creative when breaking out of a sandbox of any type. Only if you never accept user input can you consider yourself reasonably safe on paper (but here every command is an input). Paper security assumptions are a weak way to assess software; they are similar to 'no-bug' assumptions for an algorithm on paper: as soon as you implement it, 99% of the time a bug turns up.

Guard and Cucumber: when I edit a step definition I'd like to only run features that implement this step

I have read the topic Guardfile for running single cucumber feature in subdirectory?, and this works great: when I change a feature, only this will be run by guard.
But in the other direction it doesn't work: when I edit any step definition file, all features are always run, whether or not they use any of the steps in that file.
This is not nice. At the least, I'd like only those features to be run which use any of the steps in the edited file; even better would be if Guard could see which step is currently being edited and then run only the features that use this specific step.
The first shouldn't be that hard to accomplish, I guess; the second rather seems wishful thinking...
To master Guard and have the perfect setup for your projects and your own needs, you have to change the Guardfile and configure your watchers accordingly. The templates that come with each Guard plugin try to match the most useful behavior for most users, which might differ from your personal preferences.
Each Guard plugin starts with the guard DSL method, followed by an options hash to configure the Guard plugin. The options are often different for different Guard plugins and you have to consult the plugin README for more information.
Within the guard block do ... end you normally configure your watchers. A watcher must be defined with a RegExp that describes the files to be watched. I use Rubular to test my watchers, and you can paste your current feature files (copied from the output of find features) to have real files to test your RegExp against.
The line
watch(%r{features/.+\.feature})
for example watches for all files in the features folder that ends with .feature. Since there is no block provided to the watcher, the matched file is passed unmodified to Guard::Cucumber for running.
The watcher
watch(%r{features/support/.+}) { 'features' }
matches all files in the features/support directory, and because the block always returns features, every time a file within the support directory changes, features is passed to Guard::Cucumber and thus all features are executed.
The last line
watch(%r{features/step_definitions/(.+)_steps\.rb}) do |m|
  Dir[File.join("**/#{m[1]}.feature")][0] || 'features'
end
watches every file that ends with _steps.rb in the features/step_definitions directory and tries to match a feature for the step definition. Please note the parentheses in the RegExp features/step_definitions/(.+)_steps\.rb. They define a match group that is available later in your watcher block. For example, a step definition features/step_definitions/user_steps.rb will match, and the first match group (m[1]) will contain the value user.
Now we try to find a file named user.feature in all subdirectories (**). If one is found, the first matching file ([0]) is run; if nothing is found, all features are run instead.
So it looks like you've named your steps differently from what the default Guard::Cucumber Guardfile expects, which is totally fine. Just change the watcher to match your naming convention.
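For example, if the step definition files were named like features/step_definitions/user.rb instead of user_steps.rb (an assumed convention), the watcher would become:
watch(%r{features/step_definitions/(.+)\.rb}) do |m|
  Dir[File.join("**/#{m[1]}.feature")][0] || 'features'
end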
