I'm struggling with how to set up streams for the following scenario:
I have a library project (//libX) using typical mainline, release and dev streams.
However, I want to have dev streams for separate products (//libX/projectA) that use this library. These products have different directory structures, and I want to map //libX/main/... to a subfolder //libX/projectA/extern/libX/....
For example, my lib is structured like this:
//libX/main
    /bin
    /src
    /tests
    readme.txt
And my projects are something else altogether, but use my lib:
//libX/projectA
    /documentation
    /extern
        /libX
            /bin
            /src
            /tests
            readme.txt
    /MaxSDK
    /source
    /tools
    config.xml
The closest I've gotten it to work was with this:
Paths:
    share ...
Remaps:
    ... extern/libX/...
But the Remaps only seem to fix the file locations on the local machine. Using the above remap settings, the libX files end up at the same root as the projectA files.
Can the above scenario work with streams, or should I go back to branch specs?
Thanks!
Your project shouldn't be a child stream of your lib stream -- putting it in its own stream depot seems less confusing:
//libX/main
    /bin
    /src
    /tests
    readme.txt

//libX/projectA (child of //libX/main)
    /bin
    /src
    /tests
    readme.txt

//projectA/main
    /documentation
    /extern
        /libX (mirror of //libX/projectA)
            /bin
            /src
            /tests
            readme.txt
    /MaxSDK
    /source
    /tools
    config.xml
And you'd get this structure by doing:
Stream: //projectA/main
Paths:
    share ...
    import extern/libX/... //libX/projectA/...
Unfortunately there are some limitations with this approach. If your libX Paths aren't a trivial share ..., then the import won't pick them up correctly, since the import path's depot-path syntax imports a depot path, not a stream path. With a normal import you also can't make changes to //libX/projectA from within this stream -- you can use import+ to permit this, but I've seen enough problems with import+ that my inclination would be to make this my workflow when changing the library:
p4 switch //libX/projectA
(make changes)
p4 submit
p4 switch //projectA/main
although this assumes that the library is modular enough (with its own unit tests that cover your project's use case, etc.) that you can do that work in it independently.
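For reference, the import+ route mentioned above only swaps the path type in the stream spec. A sketch, assuming your server version supports import+ paths:

```
Stream: //projectA/main
Paths:
    share ...
    import+ extern/libX/... //libX/projectA/...
```

With import+ the imported files are writable from //projectA/main, at the cost of the issues mentioned above, so weigh it against the switch-and-submit workflow.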
I am a user of a Docker image on a server. On this server, the Caffe framework is installed on the "shared" side, so that every Docker image can access the framework without having to install it itself.
However, I want to make changes to one Python script (e.g. draw_net.py), which is located in a python folder. I want to copy the files to my own workspace and continue working with them there, changing only draw_net.py but letting it keep using the compiled data (otherwise I would have to compile it myself, which is a little painful). However, since I do not own these files, I am not able to change them in place. So I wanted to copy one file into my container and change it, but the dependencies must still hold up, and this is where I run into errors. I do not want to recompile anything in my image; I just want to change one Python script a little.
The structure looks like the following:
bin/
dev/
home/
lib64/
mnt/
proc/
run/
srv/
tmp/
var/
boot/
etc/
lib/
media/
opt/
    caffe/
        build/
            lib/
                _caffe.so
        data/
        docker/
        examples/
        matlab/
        python/
            caffe/
                __init__.py
                __init__.pyc
                pycaffe.py
                pycaffe.pyc
            draw_net.py
        src/
        cmake/
        distribute/
        docs/
        include/
        models/
        scripts/
        tools/
root/
    workspace/
        myfolder/
sbin/
sys/
usr/
Now I want to get the files from /opt/caffe/python/, especially draw_net.py, into my own docker image, i.e. into /root/workspace/myfolder.
A simple cp command resulted in the following errors:
cp: cannot create symbolic link 'cpcaffe/build': Operation not supported
cp: cannot create symbolic link 'cpcaffe/python/caffe/_caffe.so': Operation not supported
Executing the script gives me the expected result that it cannot locate the module _caffe, which is located at /opt/caffe/build/lib/_caffe.so.
I tried adding the lines

import os
os.chdir('/opt/caffe/python')

at the beginning of the script, but this did not change anything.
I guess I somehow have to make a symbolic link to the original version myself, but I am a bit of a newbie around this topic and could use some pointers.
I have root access to all folders under root/; all other folders are shared folders where I only have read rights.
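To make the goal concrete, what I'm effectively trying to set up is something like the following sketch. It assumes that the only real problem is Python's module search path, so pointing PYTHONPATH at the shared install (paths taken from the tree above) would let an edited copy of draw_net.py find the pre-compiled modules without copying or recompiling anything:

```shell
# Make the shared, pre-compiled Caffe modules importable from anywhere:
# /opt/caffe/python holds the caffe package, /opt/caffe/build/lib holds _caffe.so.
export PYTHONPATH="/opt/caffe/python:/opt/caffe/build/lib:${PYTHONPATH:-}"

# Then run the edited copy from my own workspace, e.g.:
#   python /root/workspace/myfolder/draw_net.py <net.prototxt> <out.png>
```

This avoids the symlink problem entirely, since nothing from the read-only shared side gets copied.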
I have a Yesod site and have created a handler for handling downloads and enforcing constraints. My Yesod project directory has a subdirectory called downloads, and it contains files I want the user to be able to download if they are logged in. The handler works great in my development and staging boxes, but breaks when I transfer to production. I tracked the problem down to yesod keter not archiving the files when it builds its bundle.
How do I convince keter to include the directory?
All the yesod keter command does is create a .tar.gz compressed archive file with the .keter extension containing the following subdirectories:
config: an exact copy of the identically named directory in your source tree
dist: contains a subdirectory bin containing your app's binary
static: an exact copy of the identically named directory in your source tree
Note that the path to your app's binary is set in config/keter.yml via the exec setting while the path to your static files is set via the root setting. The exact set of files included by the yesod keter command is specified in the findFiles function if you want to take a look at the source code.
If you want to customize the contents of your .keter file it is probably most straightforward to write a shell script to create the archive. With this script you can add arbitrary extra directories to the archive.
The bare minimum bash script you'd need to emulate the behaviour of yesod keter is as follows:
#!/bin/bash
tar czvf myapp.keter config/ dist/bin/ static/
You can customize this however you want to produce the correct content. Adding downloads/ to the end of this command line should do the trick.
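For instance, a variant that also bundles the question's downloads directory might look like this (a self-contained sketch; the mkdir line only stands in for a real project tree, and myapp is a placeholder name):

```shell
#!/bin/bash
set -e

# Stand-in project tree; in a real project these directories already exist.
mkdir -p config dist/bin static downloads

# Emulate `yesod keter` (a gzipped tarball with a .keter extension),
# adding downloads/ on top of the standard three directories.
tar czvf myapp.keter config/ dist/bin/ static/ downloads/
```

After deployment, the downloads directory then sits next to config and static inside the unpacked bundle, where the handler can find it.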
I have an app with the following (I would have thought quite common) directory hierarchy:
/src
    subdir1/        # Subdirs with more source files.
        more.c
        SConscript
    foo.c           # Source files.
    foo.h
    SConscript
/other              # Other top-level directories with no source code.
/stuff              # However, there are other assets I may want to build.
README              # Other top-level files.
SConstruct
The problem is that when I run scons from the top-level directory, it calls gcc from that directory without cding into src, like this:
gcc -o src/foo.o src/foo.c
This is problematic for several reasons:
Within my program, I #include files giving the path relative to the src directory. For example, more.c could include foo.h with #include "foo.h". This fails because GCC is run from the parent directory. I don't want to change my includes to #include "src/foo.h".
I use the __FILE__ special macro for things like logging. When built from the top-level directory, GCC puts "src/" at the front of all filenames, since that was the path it was given to compile. This may seem picky, but I don't want that, because I think of my source tree as being relative to the src directory.
(Edit: I should add that obviously #1 can be fixed by adding -Isrc as a flag to GCC, but this seems like more hacks around the main issue.)
How can I make SCons cd into the src directory before calling gcc?
I don't want to get rid of the src directory and move everything up, because there are lots of other (non-code) files at the top level.
I don't want SCons to cd into every subdirectory. It should just cd into src and then build all files in the hierarchy from there.
I could solve this by moving SConscript inside the src directory and running it from there, perhaps using a Makefile at the top level. But this seems quite hacky, and I also do want to use SCons to build (non-code) assets in other directories than src.
I've read that you can make a custom Builder and make it change directories. However, I do not want to write a whole new Builder for C/C++. Is there a way to modify the behaviour of an existing builder without writing one from scratch? Also, on one forum, someone said that changing directories from within a Builder would break parallel builds since it would change the directory that other tasks are building from. Is this true?
This behavior is just how SCons works, and it's not possible to avoid or change it. I've been looking for some supporting documentation in its favor and haven't found any. It's just something I've become used to.
As you mention, the include paths are simple to fix. The harder part is the __FILE__ macro. I hadn't ever noticed this until you mentioned it. Unfortunately, I think the only way around it is to strip the path in the logger, which is a rather ugly fix.
You can add src to your env['CPPPATH'] which will fix the include paths.
env.Append(CPPPATH=[Dir('src')])
However, this doesn't solve the problem with __FILE__.
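Putting the CPPPATH fix in context, the relevant part of the top-level SConstruct would be a sketch like this (file names taken from the layout in the question; this is build configuration, not a drop-in file):

```
# SConstruct (sketch)
env = Environment()
env.Append(CPPPATH=[Dir('src')])   # lets more.c keep #include "foo.h"
Export('env')
SConscript('src/SConscript')       # src/SConscript then builds with this env
```

The CPPPATH entry makes GCC receive -Isrc automatically, so quoted includes resolve against src/ even though the compiler still runs from the top-level directory.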
So far, I've only seen examples of running SCons in the same folder as the single SConstruct file resides. Let's say my project structure is like:
src/*.(cpp|h)
tools/mytool/*.(cpp|h)
What I'd like is to be able to run 'scons' at the root and also inside tools/mytool. The latter compiles only mytool. Is this possible with SCons?
I assume it involves creating another SConstruct file. I've made another one: tools/mytool/SConstruct
I made it contain only:
SConscript('../../SConstruct')
and I was thinking of doing Import('env mytoolTarget') and calling Default(mytoolTarget), but running just the above runs in the current directory instead of from the root, so the include paths are broken.
What's the correct way to do this?
You can use the -u option to do this. From any subdirectory, scons -u will search upwards in the directory tree for an SConstruct file.
I am planning my directory structure for a linux/apache/php web project like this:
Only www.example.com/webroot/ will be exposed by Apache.
www.example.com/
    webroot/
        index.php
        comp1/
        comp2/
    component/
        comp1/
            comp1.class.php
            comp1.js
        comp2/
            comp2.class.php
            comp2.css
    lib/
        lib1/
            lib1.class.php
The component/ and lib/ directories will only be on the PHP include path.
To make the css and js files visible in the webroot directory I am planning to use symlinks.
webroot/
    index.php
    comp1/
        comp1.js (symlinked)
    comp2/
        comp2.css (symlinked)
I tried following these principles:
layout by components and libraries, not by file type and not by 'public' or 'non-public'; index.php is an exception. This is for easier development.
expose only the minimal set of files in a public web directory and make everything else inaccessible to the web, symlinking the files that the components and libs need to be public into a public location, but still mirroring the layout. That way the component and library structure is also visible in the links in the resulting HTML, which might help development.
git usage should be safe and always work. It would be OK to follow some procedure for adding a symlink to git, but after that, checking symlinks out or changing branches should be handled safely and cleanly.
How will git handle the symlinking of the single files correctly, is there something to consider?
When it comes to images I will need to link whole directories; how do I handle that with git?
component/
    comp3/
        comp3.class.php
        img/
            img1.jpg
            img2.jpg
            img3.jpg
They should be linked here:
webroot/
    comp3/
        img/ (symlinked?)
If using symlinks for that has disadvantages, maybe I could move the images into the webroot/ tree directly, which would break the first principle in favor of the third (git practicability).
So this is a git and symlink question. But I would be interested to hear comments about the php layout, maybe you want to use the comment function for this.
As soon as you need to reuse some set of files elsewhere, that's when you should start thinking in terms of components or (in git) submodules.
Instead of managing webroot, comp, and lib within the same repo (which is the SVN or "centralized" way of a CVCS), you define:
n repos, one per component you need to reuse (so 'img' would be a Git repo reused as a submodule within webroot, for instance)
a main project for referencing the exact revision of those submodules you need.
That is one of the advantages of submodules over symlinks: you reference one exact revision, and if that component has some evolutions of its own, you don't see them immediately (not until you update your submodule, anyway).
With a symlink, you see whatever state is the set of files at the other end of that link.
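As a concrete sketch of the submodule approach (self-contained with throwaway repositories; all names and paths are placeholders for illustration):

```shell
set -e

# A standalone "img" component repository (placeholder, empty content).
git init -q img-component
git -C img-component -c user.email=dev@example.com -c user.name=dev \
    commit -q --allow-empty -m "img component v1"

# The main project references the component at one exact revision.
git init -q mainproject
git -C mainproject -c protocol.file.allow=always \
    submodule add ../img-component webroot/comp3/img
git -C mainproject -c user.email=dev@example.com -c user.name=dev \
    commit -qm "pin img component at an exact revision"
```

The commit in mainproject records the component's SHA in .gitmodules plus a gitlink entry, so checkouts and branch switches reproduce exactly that revision, which is the safety property the symlink layout can't give you.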