I am doing a timestamp-only build to bulk convert image files. Many of the converted image files already exist, but I like to make sure that they are all checked through each time.
How come SCons requires a database file (.sconsign.dblite) that it uses for MD5 hash data when it's instructed (via env.Decider("timestamp-newer")) to only deal with timestamps? It shouldn't need to keep a database between builds for timestamps because all the information is associated with the files themselves.
If the dblite database doesn't exist, SCons reconverts all the images, regardless of whether their timestamps imply they need to be rebuilt or not. The title is an example of the message I get when the dblite database does not exist.
If anyone can explain this I'd really appreciate it. I love the functional programming with Python, but SCons itself is not quite doing it for me at the moment.
Using "timestamp-newer", SCons actually stores the timestamp info. You can see why here:
Using Time Stamps to Decide If a File Has Changed
Try using "timestamp-match" instead.
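For instance, a minimal SConstruct sketch (nothing here is from the original build; it just shows where the Decider call goes):

    env = Environment()
    # Rebuild only when a file's timestamp differs from the timestamp recorded
    # on the previous build, instead of comparing source vs. target times.
    env.Decider("timestamp-match")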
I finally got this sorted. Brady was right about how to use SCons, but a few days ago I eventually worked out that you can also control exactly what gets built by controlling which build commands are issued in the first place. In my case I ignored any image files for which the target file already exists, using os.path.exists() (see the sketch below).
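Roughly, the relevant part of my SConscript looked like the sketch below (the Convert builder and the image extensions are hypothetical stand-ins; Glob() is SCons's own helper):

    import os

    for src in Glob("*.png"):
        dst = os.path.splitext(str(src))[0] + ".jpg"
        # Only issue a build command when the converted image is missing;
        # SCons never sees the other targets, so it never reconverts them.
        if not os.path.exists(dst):
            env.Convert(target=dst, source=src)  # hypothetical image builder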
Sounds simple, but it is a conceptual difference between SCons and make, because make does not save its state between builds in the way SCons does.
Yes, I'm trying to work out the same thing, but I'm doing bulk conversion of video files which takes several days if done unnecessarily. I've already done most of it.
So I want a way to tell SCons, "For files that exist now, store their existing timestamps/MD5s, and don't rebuild unless that changes in future."
Will report back if I find a way...
I think your question is really about why there's a .sconsign.dblite when you set the decider to just check timestamp.
One reason is that it allows SCons to keep track of the method used to produce each target. If that changes, even if the timestamp doesn't, it should rebuild the affected targets.
Have you tried building a single file, and then using the sconsign utility to examine the contents of the .sconsign.dblite file?
I have a directory with multiple source files of indeterminate name. The only thing I know is the file extension. I want to take each source file, and build a single target from each. The method I'm currently using is to determine the name of each source using a for loop:
from os import listdir

targets = []
for file in listdir('.'):
    if file.endswith('.xdm'):
        targets += env.m4(source=file)
The advantage of doing it programmatically like this is that the SConscript doesn't have to be maintained by the developers as they add new sources. The problem is that the targets are no longer cleaned, because of something to do with dependencies that I don't entirely understand.
So my question is: is there a more appropriate way to do this, using built-in SCons functionality, without relying on more traditional flow control, or should I just make sure each of my sources is determined and list them individually in the SConscript?
Instead of fiddling with listdir I would simply use the Glob() method, as provided by SCons itself:
for file in Glob("*.xdm"):
    env.m4(source=file)
This (like the example from your question) is a perfectly fine approach, since it uses the fact that SConscripts are actually Python scripts. The Glob() approach has the advantage of also finding *.xdm files that don't exist on the hard drive yet, but may get created as part of the build process later.
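For instance (a sketch only; the xdmgen command is made up), if an earlier build step declares a *.xdm target, Glob() will pick it up even before the file exists on disk:

    # Declares generated.xdm as a build target; it may not exist on disk yet.
    env.Command('generated.xdm', 'data.txt', 'xdmgen < $SOURCE > $TARGET')

    # Glob() matches known target nodes as well as files on disk, so
    # generated.xdm is included here along with any *.xdm files already present.
    for file in Glob("*.xdm"):
        env.m4(source=file)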
I wonder about the problems that you mentioned, regarding cleaning of the targets. The Q&A linked in your question above seems unrelated to me. If you experience actual "cleaning" problems with one of the approaches above, please post a separate question together with the full verbatim input and output. If it should turn out that this doesn't work out-of-the-box, I'd consider it to be a bug.
I am using electron-vue & electron-packager.
I am wondering whether I can do something like an incremental update: after running an electron build command, instead of copying the whole electron-linux-x64 folder to my dist machine to bring it up to date, I would only need to copy some files in the folder.
Here is what I have found so far: I edit some code for the renderer process, then I let electron-packager build a package for Linux. I find that not all the generated files have changed; it seems that only resources/*.asar has changed. If I just copy those files to the dist machine, it seems the machine updates fine. But I am not sure whether some hidden files have changed too.
I would appreciate it if anyone could help me!
Since this question has some upvotes, and after three years I have gained more knowledge, let me answer it myself so that whoever reads this post can find a solution :)
Firstly, in 2020 there may already be ready-made solutions. For instance, try this and this.
Secondly, you can also use rsync to copy only the changed parts of a folder. Moreover, if a big file (say 10GB) only changes a little bit in the middle (say 1MB), it will only transfer that little bit (say 1MB). This is a general tool and can be used everywhere.
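For example, a minimal sketch of such a transfer, driven from Python (the host and paths are hypothetical; it assumes rsync and SSH access on both machines):

    import subprocess

    subprocess.run(
        [
            "rsync",
            "-av",        # archive mode, verbose
            "--delete",   # remove files on the target that no longer exist locally
            "electron-linux-x64/",            # trailing slash: sync the folder's contents
            "user@dist-machine:/opt/myapp/",  # hypothetical destination
        ],
        check=True,  # raise if rsync exits non-zero
    )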
Lastly, as a side remark, manually copying your files to the development server is not a good idea. Try to automate this process. The simplest option would be a several-line bash script using scp/rsync and so on; the most complex might be Kubernetes and Docker.
I need to obfuscate my source code as well as possible, so I decided to use uglifyjs2. My project structure has nested directories; how can I run the whole project through uglifyjs2 instead of giving it all the input files one by one?
I wouldn't mind if it minified the whole project into a single file or something.
I've done something very similar to this in a project I worked on. You have two options:
1. Leave the files in their directory structure.
This is by far the easier option, but provides a much lower level of obfuscation since someone interested enough in your code basically has a copy of the logical organization of files.
An attacker can simply pretty-print all the files and rename the obfuscated variable names in each file until they have an understanding of what is going on.
To do this, use fs.readdir and fs.stat to recursively go through folders, read in every .js file, and output the mangled code (a Python equivalent is sketched after option 2 below).
2. Compile everything into a single JS file.
This is much more difficult for you to implement, but does make life harder on an attacker since they no longer have the benefit of your project's organization.
Your main problem is reconciling your require calls with files that no longer exist (since everything is now in the same file).
I did this by using Uglify to perform static analysis of my source code by analyzing the AST for calls to require. I then loaded the source code of the required file and repeated.
Once all code was loaded, I replaced the require calls with calls to a custom function, wrapped each file's source code in a function that emulates how node's module system works, and then mangled everything and compiled it into a single file.
My custom require function does most of what node's require does except that rather than searching the disk for a module, it searches the wrapper functions.
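As referenced in option 1 above, here is a rough equivalent of that recursive walk, sketched in Python rather than Node, driving the uglifyjs command-line tool (assumes uglifyjs is on the PATH; the src and build directory names are hypothetical):

    import os
    import subprocess

    SRC, OUT = "src", "build"

    for dirpath, _dirnames, filenames in os.walk(SRC):
        for name in filenames:
            if not name.endswith(".js"):
                continue
            src_file = os.path.join(dirpath, name)
            out_file = os.path.join(OUT, os.path.relpath(src_file, SRC))
            os.makedirs(os.path.dirname(out_file), exist_ok=True)
            # -m mangles identifiers, -c applies compression passes
            subprocess.run(["uglifyjs", src_file, "-m", "-c", "-o", out_file],
                           check=True)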
Unfortunately, I can't really share any code for #2 since it was part of a proprietary project, but the gist is:
Parse the source text into an AST using UglifyJS.parse.
Use the TreeWalker to visit every node of the AST and check if
    node instanceof UglifyJS.AST_Call && node.start.value == 'require'
As I have just completed a huge pure Node.js project spanning 80+ files, I had the same problem as the OP. I needed at least minimal protection for my hard work, but it seems this very basic need has not been covered by the npm open-source community. To add salt to the wound, the JXCore package encryption system was cracked last week in a few hours, so it's back to obfuscation...
So I created a complete solution that handles file merging and uglifying. You also have the option of excluding specified files/folders from merging; those files are then copied to the output location of the merged file, and references to them are rewritten automatically.
NPMjs link of node-uglifier
Github repo of node-uglifier
PS: I would be glad if people would contribute to make it even better. This is a war between thieves and hard-working coders like yourself. Let's join forces and increase the pain of reverse engineering!
This isn't supported natively by uglifyjs2.
Consider using webpack to package up your entire app into a single minified .js file, excluding node_modules:
http://jlongster.com/Backend-Apps-with-Webpack--Part-I
I had the same need - for which I created node-optimize and grunt-node-optimize.
https://www.npmjs.com/package/grunt-node-optimize
I was looking at the SCons source code but can't seem to pinpoint where it calculates the timestamp (I didn't have trouble finding the MD5 calculation).
And the manual page just refers to it as a timestamp and does not go into depth about what it actually is. Maybe it is obvious to some, but I am still unclear about what exactly it means.
Timestamp of what?
Is the following what SCons uses for timestamp consistency?
    time.ctime(os.path.getmtime(file))
Basically checking when a file was last modified?
And then comparing this against what at build time?
If you have ever worked with Make, the concept should be familiar. Basically it compares the modification time of the source with that of the target, and if the source is newer, it rebuilds the target (a rough illustration follows the list of options below). There is also some file signature information that SCons stores internally in the .sconsign.dblite file, which I don't believe can be accessed programmatically.
As can be seen in the SCons Decider() function docs, the behaviour can be configured to be one of the following (copied from the SCons man page):
timestamp-newer (This is the behavior of the classic Make utility, and make can be used as a synonym for timestamp-newer)
timestamp-match
MD5
MD5-timestamp
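As a rough illustration of the make-style "timestamp-newer" idea (this is not SCons's actual implementation, just the concept):

    import os

    def needs_rebuild(source, target):
        # Rebuild when the target is missing or the source was modified
        # more recently than the target (make's classic rule).
        if not os.path.exists(target):
            return True
        return os.path.getmtime(source) > os.path.getmtime(target)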
I am using TortoiseSVN to synchronize our code.
But recently I found something that is not so convenient.
When I modify a file, let's say a.jsp, my colleague might also modify this file at the same time. This may result in a conflict: one of us needs to check in his code first, and the other one then needs to update to the latest version and resolve the conflicts one by one. This is really error-prone.
So I need some function in TortoiseSVN that can lock a.jsp while I am editing it and prevent my colleague from modifying the file at the same time.
I have tried the "lock" function in TortoiseSVN, but it doesn't work: when I lock the a.jsp file, my colleague can still modify the file at the same time, without any prompt or alert like "your colleague is modifying this file, please wait until check-in to modify it"...
Is there any better solution?
Thanks in advance!
Yes, there is a better solution. It consists of three parts:
Never lock; you don't need to.
Don't work on the same file, or at least the exact same part of the file, at the same time as someone else.
If you do, be happy to merge.
Merging is a typical part of using a source control system like SVN. You shouldn't be afraid of it, you should embrace it happily.
Generally, the merge can be automatic, unless you are working in the exact same area. In that case you must make the changes manually (but the diff tool in TortoiseSVN will help you with this).
I would suggest that if this is happening a lot, you re-evaluate how you are assigning out work within your project.
As mentioned by others, the most flexible workflow is one where you don't need to lock. However, if you really do want to lock a file, Subversion provides this capability.
Add the property svn:needs-lock to the file(s) for which you want to require locking, and set the value to * (for example, svn propset svn:needs-lock '*' a.jsp; I don't think it actually matters what the property value is). Then when checking out or updating, Subversion will mark those file(s) as read-only in your working copy to remind you to lock them first.
You would lock the file with Subversion (I see you already found this option, but it doesn't do anything for files that don't need locking) which will make the file read-write on disk. Then make your changes. When you check in the file, Subversion will automatically release the lock (but there is an option to keep it if you like). Then the file will be read-only in your working copy again.
Note that locking in Subversion is "advisory" only. This means that anybody else can force acquisition of the lock even though somebody else might already have it. In this case, the workflow is still supported because somebody may still need to merge their changes as they would without locking.
Finally, locking files is the best way to deal with non-text files in Subversion, such as image files or MS Word files. Subversion cannot merge these types of files automatically, so advisory locking helps make sure that two people don't try to edit the same file at the same time.
Tortoise has a "merge" option that you might want to try once you've updated your code with your colleague's changes.
There is a practice amongst SVN users (especially agile SVN users) called "The Check-In Dance". This simple practice can cut down immensely on the amount of conflicts you have when checking code into an SVN repo. It goes like this:
When you're ready to check in some changes to the repo:
1. Do an update first to get everyone else's changes.
2. Run your build script (or just compile if you have no build script)
3. If all is well, commit your changes.
Locking causes its own set of problems, not the least of which is that people tend to forget to "unlock" the file, leaving everyone else totally unable to work if they need to change that file.
Resolving merge conflicts in SVN is fairly easy, so locking should become a non-issue for you once you get used to using TortoiseSVN.
Hope this helps,
Lee