Perforce Skip File Download Based on Extension [duplicate]

I am using the Perforce command-line tools, and with the p4 tool I want to sync to a stream.
The problem is that I want to exclude a directory from my workspace.
If my workspace were defined as a view, this would be possible with something like:
View:
//depot/Foo/... //Client/Foo/...
-//depot/Foo/.../bin/... //Client/Foo/.../bin/...
But the Stream: setting overrides the View: specification.
I have set the stream using the command:
$ p4 client
I also looked into the P4IGNORE mechanism, but it is of no use for my problem, as P4IGNORE does not influence the sync command.
I tried adding a Paths: field to my client spec, but that is not recognized in this context. I think it is used when specifying the stream itself, as done by the administrator of the depot?

This is exactly what virtual streams are for. :) Create a child of the stream with type virtual, and exclude the directory from the virtual child stream (via either Paths or Ignored in the stream spec). This will generate a client view that is the desired subset of the parent stream.
https://www.perforce.com/blog/virtual-streams-windows-big-projects
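As a rough sketch (the stream and directory names here are invented), the virtual child could look like this:
p4 stream -t virtual -P //Depot/Main //Depot/Main-NoBin
with a spec along the lines of:
Stream: //Depot/Main-NoBin
Parent: //Depot/Main
Type: virtual
Paths:
share ...
exclude Foo/bin/...
Then point your client at the virtual stream and sync:
p4 client -s -S //Depot/Main-NoBin
p4 sync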

Related

Get only one folder from a stream depot in Perforce

I am new to Perforce stream depots and stuck at step 1. I am trying to create a workspace that includes only one folder from a huge stream depot.
Every time I try, it seems to want to download the entire stream, which is a huge number of files. I can see no way to limit it to just one folder.
This is what virtual streams are for.
Create a new stream that is a child of the stream in question, with type "virtual", and specify its Paths as the single folder, e.g.:
Stream: //streams/just_the_facts
Parent: //streams/main
Type: virtual
Paths:
share path/to/the/facts/...
Now switch to that stream:
p4 switch just_the_facts
and you only have the files from the facts folder.
What I ended up doing was:
p4 set P4CLIENT=MyWorkspace
p4 client -s -S //My/Stream
to create the workspace, and then:
p4 sync //My/Stream/just/the/dir/I/want...

Retrieving projects without binary files

We have multiple very large Perforce repositories with many binary files mixed in with source code. Is it possible to configure a client spec so that certain file extensions (e.g. zip, jar, war) are never synced to the local copy?
We are syncing periodically to generate metrics on source code, and for this purpose the binary files just get in the way.
In your client spec, use "exclusionary" mappings, marked with a minus sign, like so:
View:
//depot/project/... //client-name/project/...
-//depot/project/.../*.zip //client-name/project/.../*.zip
-//depot/project/.../*.jar //client-name/project/.../*.jar
-//depot/project/.../*.war //client-name/project/.../*.war
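To apply this, edit the spec and sync again; a plain p4 sync will then also remove any previously synced files that the new exclusions rule out:
p4 client
p4 sync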

Adding files to source control on Linux using cleartool

I have a file that I want to add to source control on Linux using cleartool.
I've followed the IBM documentation for this and tried:
cleartool mkelem testScript.sh
I got an error: Can't modify directory "." because it is not checked out.
I would also like to know how I can check out/check in files or directories and set activities.
You need to check out the parent folder first.
cd /path/to/file/
cleartool mkact newfile                # create and set a UCM activity (UCM views only)
cleartool checkout -c "add file" .     # check out the parent directory
cleartool mkelem testScript.sh         # create the new element (left checked out)
cleartool checkin -nc testScript.sh .  # check in the file and its parent directory
The cleartool mkact command will work if you are in a UCM view.
It creates and sets a new activity, which will record the files and folders you modify.
Here, the new activity newfile will record the new version of the parent folder, as well as versions 0 and 1 of the file.
You should create separate questions for... separate questions.
Going back to the original question: the reason it isn't working is, as VonC has pointed out, that you haven't checked out the parent of the file. Remember, when you run cleartool mkelem, you are about to modify the contents of the parent directory (. in this case) by adding a new "pointer" to the element you're creating. As with everything else in ClearCase, when you want to modify the contents of an element, you have to check it out first.
One of ClearCase's greatest strengths (and one of the hardest to wrap one's head around), IMO, is the concept of an "element". Everything behaves similarly as an element: making any change to an element (file or directory) means you have to check it out first.
In the case of a file, that's easy to grasp: you're just editing lines in a file. For a directory, it's almost as easy: you can think of a directory as just a list of pointers to data blobs. We make the name of a blob something convenient we can remember (like foo.java or myapplication.cc or README.md). But we can also change the name of the pointer (even though it points to the same data blob) by renaming a file, and we can remove the pointer to a blob, without impacting the blob itself, using rmname.
In ClearCase's case, the mkelem command is a little special: it creates the initial data blob and adds a pointer to that blob in the current directory (it does two things at once).
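A short illustration of those directory operations (the file names are hypothetical):
cleartool checkout -c "restructure" .   # the directory must be checked out first
cleartool mv foo.java bar.java          # rename: the pointer changes, the data blob does not
cleartool rmname README.md              # drop the pointer; the element and its history survive
cleartool checkin -nc .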

In Puppet using Hiera, where do I put the files I want to have installed on nodes?

I know Puppet modules always have a files directory, I know where it's supposed to be, and I have used the source => syntax effectively from my own handwritten modules, but now I need to learn how to deploy files using Hiera.
I'm starting with the saz-sudo module and I've read the docs, but I can't see anything about where to put the sudoers file; the one I want to distribute.
I'm not sure whether I need to set up a site-wide files dir in /etc/puppetlabs/puppet and then make subdirs in there for every module, or what. And does Hiera know to look in /etc/puppetlabs/puppet/files/sudo if I say source => "puppet:///files/etc/sudoers"? Do I need to add a pathname in /etc/hiera.yaml? Add a line - files?
Thanks for any clues.
From a cursory look at the Puppet module, given their example of using Hiera:
sudo::configs:
  'web':
    'source': 'puppet:///files/etc/sudoers.d/web'
  'admins':
    'content': "%admins ALL=(ALL) NOPASSWD: ALL"
    'priority': 10
  'joe':
    'priority': 60
    'source': 'puppet:///files/etc/sudoers.d/users/joe'
This suggests it assumes you have a "files" Puppet module. So under your Puppet modules directory:
mkdir -p files/files/etc/sudoers.d/
Drop your files in there.
Explanation:
The URL 'puppet:///files/etc/sudoers.d/users/joe' breaks down as follows:
puppet: protocol
///: Three slashes indicate the source of the file is in a module.
files: name of the module
etc/sudoers.d/users/joe: full path to the file within the module's "files" directory.
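So on disk, the URL maps to a layout like this (assuming the "files" module created with the mkdir above):
modules/
  files/                  <- the module named in the URL
    files/                <- the module's own "files" directory (implicit in the URL)
      etc/
        sudoers.d/
          users/
            joe           <- the file being served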
You don't.
The idea of a module (Hiera-backed or not) is to relieve you of the need to manage the whole sudoers file yourself. Instead, you can manage each single entry in the sudoers file.
I recommend reviewing the documentation carefully. You should definitely not have a file { "/etc/sudoers": } resource in your manifest.
Hiera doesn't have anything to do with files.
Hiera is like a variables database: it serves you values based on the hierarchy you have defined.
The files inside Puppet are usually accessed through mechanisms like source =>, and those files follow a basic directory structure.
In most cases you reference either a file or a template.
A template can serve your needs here and automatically build a sudoers file.
There are also modules that support modifying sudoers.
It is up to you what to do.
In this case, saz stores the location of the file in Hiera, but the real location can be a file inside your Puppet tree (like a module file or something similar), which is completely unrelated to Hiera itself.
Read about the Puppet file server.
If you have questions, just ask.

How to call a bash script automatically when directory contents change

My goal is to run a bash script automatically whenever any new file is added to a particular directory or any subdirectory of that particular directory.
Detail Scenario:
I am creating an automated process for file submission from teachers to students and vice versa. The sender uploads a file and it is stored inside the Uploads directory on the LAMP server, named in the format "name_course-name_filename.pdf". I want some method so that whenever a file is stored inside the Uploads folder, a script is called at the same time to send that file to the list of receivers.
From the database I can find the list of receivers for that particular course and student.
My only concern is how to call a script automatically, and have it act on each individual file, whenever the contents of the directory change. Cron would work at intervals, but that is not real-time.
Linux provides a nice mechanism for this purpose called inotify. inotify is mostly available as a C API, but shell utilities have been developed as well. You can use inotifywait from inotify-tools (the package name in Debian) for this. Here is a basic example:
#!/bin/bash
directory="/tmp" # or whatever directory you are interested in
# -m: keep monitoring instead of exiting after the first event
# -e create: only report file-creation events
inotifywait -m -e create "$directory" |
while read folder eventlist eventfile
do
    echo "the following events happened in folder $folder:"
    echo "$eventlist $eventfile"
done
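For example, assuming the script above is saved as watch.sh (the name is arbitrary), creating a file in the watched directory should produce output along the lines of:
./watch.sh &
touch /tmp/hello.txt
the following events happened in folder /tmp/:
CREATE hello.txt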
Update:
If the problem gets more complicated, for example if you have to monitor recursive, dynamic directory structures, you should have a look at incron. It's a cron-like daemon which executes scripts on certain events, but the events are file-system events rather than timer events.
There is another option to 'inotifywait':
-d --daemon
Same as --monitor, except run in the background logging events to a file
that must be specified by --outfile. Implies --syslog.
For completeness:
-m --monitor
Instead of exiting after receiving a single event, execute indefinitely.
The default behaviour is to exit after the first event occurs.
Within the do...done block of your while loop, you might parse each event report for interesting details, then use case...esac to take action based on the events you care about.
For something that you plan to rely on for your operations, you might also consider replacing the hard-coded $directory with some sort of configuration file. Such a file might include the path and filename, the interesting events for that path and file, and a script to run when those events happen.
The script might take the list of events as parameters and then use case...esac again; see the sketch below.
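A rough sketch of that idea (the Uploads path and the handler script are hypothetical):
#!/bin/bash
directory="/var/www/Uploads"   # hypothetical upload directory
# -r watches subdirectories too; moved_to catches files moved into the tree
inotifywait -m -r -e create -e moved_to "$directory" |
while read folder events file
do
    case "$events" in
        *CREATE*|*MOVED_TO*)
            echo "new file: $folder$file"
            # ./send_to_receivers.sh "$folder$file"   # hypothetical handler script
            ;;
        *)
            : # ignore anything else
            ;;
    esac
done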
Just one man's ramblins,
~~~ 8d;-Dan
