I'm trying to figure out how to change the default set of Client Spec Options and SubmitOptions.
set P4CLIENT=my_new_client_1
p4 client
This gives me the following default spec:
Client: my_new_client_1
...
Options: noallwrite noclobber nocompress unlocked nomodtime normdir
SubmitOptions: submitunchanged
...
Now on my machine I always want to use revertunchanged and rmdir, for example, but it seems like I need to remember to set these manually every time I create a new client.
Is there any way to achieve this? p4 set seems to only affect the things that can be set by environment variables.
You can't change the default client spec template (unless you're the Perforce system administrator) but you can set up and use your own template. You would first create a dummy client with a client spec that has the values that you want:
Client: my_template_client
...
Options: noallwrite noclobber nocompress unlocked nomodtime rmdir
SubmitOptions: revertunchanged
...
Then you just specify that the dummy client should be used as a template when creating new clients:
p4 client -t my_template_client my_new_client_1
Contrary to the first response here: you CAN create a default client spec in Perforce using triggers.
Essentially, you create a script that runs on the server whenever someone does a form-out on the client form. The script checks whether the client spec already exists, and substitutes sensible defaults if it doesn't (i.e. if it's a new client spec).
Note that this works well, and it's even in the P4 SysAdmin Guide (the exact example you're looking for is there!), but it can be a bit difficult to debug, as triggers run on the SERVER, not on the client!
Manual:
http://www.perforce.com/perforce/r10.1/manuals/p4sag/06_scripting.html
Specific Case Example:
http://www.perforce.com/perforce/r10.1/manuals/p4sag/06_scripting.html#1057213
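As a rough sketch of the idea (simplified, and not the SysAdmin Guide's exact example): register a form-out trigger on the client form and have the script rewrite the default fields in the spec file the server hands it. The trigger-table entry, the script path, and the "no Access: field means new spec" heuristic below are all assumptions for illustration:

```shell
#!/bin/sh
# Hypothetical form-out trigger script. Illustrative trigger table entry:
#   client_defaults form-out client "/p4/triggers/client_defaults.sh %formfile%"

apply_client_defaults() {
    formfile="$1"
    # A spec that has never been saved has no Access: field yet,
    # so this only rewrites brand-new client specs.
    if ! grep -q '^Access:' "$formfile"; then
        # GNU sed; on BSD/macOS use sed -i '' instead.
        sed -i -e 's/normdir/rmdir/' \
               -e 's/submitunchanged/revertunchanged/' "$formfile"
    fi
}

# The server passes %formfile% as the first argument.
if [ -n "${1:-}" ]; then
    apply_client_defaults "$1"
fi
```

Remember that this runs on the server, so any logging you add for debugging has to go somewhere you can read server-side.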
The Perforce Server Deployment Package (SDP), a reference implementation with best practices for operating a Perforce Helix Core server, includes sample triggers for exactly this purpose. See:
SetWsOptions.py - https://swarm.workshop.perforce.com/projects/perforce-software-sdp/files/main/Server/Unix/p4/common/bin/triggers/SetWsOptions.py
SetWsOptionsAndView.py - https://swarm.workshop.perforce.com/projects/perforce-software-sdp/files/main/Server/Unix/p4/common/bin/triggers/SetWsOptionsAndView.py
Using p4 client -t <template_client> is useful, is something a regular user can do, and has a P4V (graphical user interface) equivalent as well. Only Perforce super users can mess with triggers.
There is one other trick for a super user to be aware of: they can designate a client spec to be used as the default when the user doesn't specify one with -t <template_client>. That is done by setting the configurable template.client. See: https://www.perforce.com/manuals/cmdref/Content/CmdRef/configurables.configurables.html#template.client
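On a server that supports the configurable, setting it is a single command (the client name here is a placeholder, and this requires super access):

```shell
p4 configure set template.client=my_template_client
```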
One other suggestion: consider changing the default from submitunchanged to leaveunchanged rather than revertunchanged (as in the sample triggers above). With leaveunchanged, a file you still want checked out stays checked out, saving you a trip back to the file to check it out again; and if you do want to drop the change, reverting an unmodified file is slightly easier than checking it out again. It's a small thing, but leaveunchanged is the better default.
Related
I have a workspace "template" that has some files synced locally.
In the GUI I can create a new workspace, right-click, choose Get Revision..., and select "template".
The new workspace then has the same local files as the template, at the same revisions.
How can I do the same thing from the command line?
The equivalent of the "Get Revision" operation you're talking about is to use the template client's name as the revision specifier on a p4 sync:
p4 sync FILE@template
This is commonly used to recreate the state of another client for purposes of reproducing a build. The client essentially acts like a label for whatever revisions it has synced. To sync the entire workspace instead of that one specific FILE, simply use p4 sync @template.
Note that this is completely separate from the concept of using the client spec as a "template" for the View of a new client:
p4 client -t template
This will create a new client spec that copies its View from the template client, but the specific set of revisions it's synced to (commonly called the "have list", i.e. the set of revisions referenced by #have within that client and by @template from any other context) is not in any way bound to the template client (unless a ChangeView is used, but that's a whole other thing).
Since these are separate operations it is not necessary to do one in order to do the other.
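For completeness, here are the two operations chained; the client names are illustrative, and note that p4 client -t will still open the spec in your editor for confirmation:

```shell
p4 client -t template new_ws     # new client spec copying template's View
p4 -c new_ws sync @template      # sync new_ws to template's have list
```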
I have a number of microservices I want to monitor for uptime. I would like to make a call to each microservice to evaluate its state. If the call succeeds, I know the application is "UP".
For an overly simplified use case, say I have the following three calls below. I want to make a call to each of them every 10 minutes. If all three respond with a 200, I want to modify an HTML file with the word "UP", otherwise the file should have the word "DOWN".
GET /api/movies/$movieId
POST /api/movies
DELETE /api/movies/$movieId
Is Express/Node.js a good framework for this lightweight app? If so, can someone point me to a GitHub stub that can get me started? Thanks!
Both Express and Restify would be fine for this sort of example if it were simply an API. The clincher is your note about returning HTML:
I want to modify an HTML file with the word "UP", otherwise the file should have the word "DOWN".
This would be more appropriate for Express, as it lets you use templating libraries like Handlebars, Mustache, Pug, etc. to do the HTML transformation.
You can use a scheduled job to check the status of your three applications, store that latest status check somewhere (a database, flat file, etc). Then a request to an endpoint such as /status on this new service would look up the latest status check, and return some templated HTML (using something like handlebars).
Alternatively, if you're comfortable with a bit of Bash, you could probably just use Linux/Unix tooling, provided you don't care about uptime history or further complexities.
You could set up Apache or Nginx to serve a file at the /status endpoint, then use a cron job to ping all your health-check URLs. If they all return without errors, update the served file to say "UP"; if any errors are returned, change the text to "DOWN".
This Unix approach can also be done on Windows if that's your jam. It is about as lightweight as you can get, and very easy to deploy and correct, but if you want to expand this application significantly in the future (storing uptime history, for example) you may wish to fall back to Express.
Framework? You kids are spoilt. Back when I was a lad all this round here used to be fields...
Create two html template files for up and down, make them as fancy as you want.
Then you just need a few lines of bash run every 10 minutes as a cron job. As a basic example, create statuspage.sh:
#!/bin/bash
# Probe the endpoint with each verb; on the first non-200 response,
# publish the "down" page and stop.
for verb in GET POST DELETE
do
    res=$(curl -s -o /dev/null -w "%{http_code}" -X "$verb" "https://$1")
    if [ "$res" -ne 200 ]
    then
        cp /path/to/template/down.html /var/www/html/statuspage.html
        exit 1   # don't exit with $res: exit codes above 255 wrap around
    fi
done
cp /path/to/template/up.html /var/www/html/statuspage.html
Make it executable (chmod +x statuspage.sh) and run it like this: ./statuspage.sh "www.example.com/api"
Three curl requests, stopping as soon as one fails, then copying the up or down template to your status page location as applicable.
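To run it every 10 minutes, a crontab entry along these lines would do (added via crontab -e; paths assumed from the example above):

```shell
*/10 * * * * /path/to/statuspage.sh "www.example.com/api" >/dev/null 2>&1
```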
I am using int-sftp:inbound-channel-adapter to download valid files from an SFTP server to a local directory. I need to delete the local file if it is rejected by my custom filter. Can this be achieved via configuration, or do I need to implement it in code? If so, is there any sample out there?
You would need to do the deletion in your custom filter. Use File.delete().
But, of course, it would be better to use a custom remote filter, instead of a custom local filter, to avoid fetching the invalid file (unless you need to look at the contents).
I have two scripts. I put them in the same namespace (the @namespace field).
I'd like them to interact with one another.
Specifically, I want script A to set RunByDefault to 123, have script B check whether RunByDefault == 123, and then have script A use a timeout or similar to call a function in script B.
How do I do this? I'd hate to merge the scripts.
The scripts cannot directly interact with each other, and // @namespace is just to resolve script name conflicts. (That is, you can have two different scripts named "Link Remover" only if they have different namespaces.)
Separate scripts can swap information using:
Cookies -- works same-domain only
localStorage -- works same-domain only
Sending and receiving values via AJAX to a server that you control -- works cross-domain.
That's it.
Different running instances, of the same script, can swap information using GM_setValue() and GM_getValue(). This technique has the advantage of being cross-domain, easy, and invisible to the target web page(s).
See this working example of cross-tab communication in Tampermonkey.
On Chrome, and only Chrome, you might be able to use the non-standard FileSystem API to store data on a local file. But this would probably require the user to click for every transaction -- if it worked at all.
Another option is to write an extension (add-on) to act as a helper and do the file IO. You would interact with it via postMessage, usually.
In practice, I've never encountered a situation where it wasn't easier and cleaner to just merge any scripts that really need to share data.
Also, scripts cannot share code, but they can inject JS into the target page and both access that.
Finally, AFAICT, scripts always run sequentially, not in parallel, but you can control the execution order from the Manage User Scripts panel.
I am working on code for a webserver.
I am trying to use webhooks to do the following tasks, after each push to the repository:
update the code on the webserver.
restart the server to make my changes take effect.
I know how to make the revision control run the webhook.
Regardless of the specifics of which revision control system I am using, I would like to know the standard way to create a listener for the webhook's POST call on Linux.
I am not completely clueless: I know how to make an HTTP server in Python and I can make it run the appropriate Bash commands, but that seems so cumbersome. Is there a more straightforward way?
Set up a script to receive the POST request (a PHP script would be enough).
Save the request into a database and mark it as "not yet finished".
Run a crontab job that checks the database for "not yet finished" tasks and does whatever you want with the information you saved.
This is definitely not the best solution, but it works.
You could use IronWorker, http://www.iron.io, to SSH in and perform your tasks on every commit. To kick off the IronWorker task you can use its webhook support. Here's a blog post that shows you how to use IronWorker's webhook functionality, and the post already has half of what you want (it starts a task based on a GitHub commit): http://blog.iron.io/2012/04/one-webhook-to-rule-them-all-one-url.html