I am using electron-vue & electron-packager.
I am wondering whether I can do something like incremental updating, that is, after running an electron build command, I don't need to copy the whole electron-linux-x64 folder to my dist machine to bring it up to date, but instead only copy some of the files in the folder.
Here is what I have found so far: I edit some code for the renderer process, then let electron-packager build a package for Linux, and I find that not all of the generated files have changed. It seems that only resources/*.asar has changed. If I just copy those files to the dist machine, it seems to update fine, but I am not sure whether some hidden files have changed too.
I would appreciate it if anyone could help me!
Since this question has received some upvotes, and after three years I have gained more knowledge, let me answer it myself so that whoever reads this post can find a solution :)
Firstly, as of 2020 there may already be ready-made solutions. For instance, try this and this.
Secondly, you can also use rsync to copy only the changed parts of a folder. Moreover, if a big file (say 10 GB) changes only a little bit in the middle (say 1 MB), it will transfer only that little bit. This is a general-purpose tool and can be used everywhere.
Lastly, as a side remark, manually copying your files to the server is not a good idea. Try to automate this process. The simplest option would be a several-line bash script using scp/rsync and so on; the most complex might involve Docker and Kubernetes.
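For example, a minimal deploy script might look like this (the host name and paths below are placeholders, not real values; adjust them to your setup):

    #!/usr/bin/env bash
    # Minimal deploy sketch; host, user and paths are placeholders.
    set -euo pipefail

    SRC="./electron-linux-x64/"                 # output folder from electron-packager
    DEST="user@my-dist-machine:/opt/myapp/"     # hypothetical target machine and path

    # rsync only transfers the parts of files that actually changed
    # (e.g. the rebuilt resources/*.asar) instead of the whole folder.
    rsync -avz "$SRC" "$DEST"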
Related
We have a large code base in Perforce. I would like to do the following nightly, automatically.
- Copy some view of "latest" into two (or more) workspaces, streams, or even just into some other folders not under perforce control.
- Check everything out (if p4 is used) and "compile" it (where "compile" may include changing most files, hence the need for them to be writable).
- Rinse and repeat the following "night" with a fresh "latest".
I know how to do this by simply copying things out, but I would like the nightly modified code to be visible from other machines, to other people, hence perhaps the need to stuff things back into Perforce.
I know how to do this with P4 workspaces.
Just wondering if p4 streams (tasks?) are a better approach, or whether there are any other recommendations.
Using workspaces is indeed what you want to do. The benefit of using streams would mostly be in simplifying the task of generating and managing these workspaces.
Do you want to keep the changes made by the build machine isolated from the mainline that everyone else uses? Should those changes also be isolated from other builds? Or do you want to make sure that everyone gets the changes ASAP and that they make their way into all other variants of the code? These are good questions to ask as you're setting this up and the answers should influence what you do.
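Whatever you decide, the nightly job for one build workspace can stay very simple. A rough sketch (the client and depot names here are made up, and the submit/revert choice depends on your answers above):

    #!/usr/bin/env bash
    # Hypothetical nightly job for one build workspace; client and depot paths are made up.
    set -euo pipefail
    export P4CLIENT=nightly-build-ws1     # workspace dedicated to this build

    p4 sync //depot/main/...              # pull a fresh "latest" into the workspace
    p4 edit //depot/main/...              # open everything for edit so files are writable
    ./compile.sh                          # whatever "compile" means for your code base

    # Either share the modified files with everyone:
    p4 submit -d "nightly build $(date +%F)"
    # ...or throw them away so the next night starts clean:
    # p4 revert //depot/main/...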
I am a 65-year-old "newbie" and generally use default options when downloading. Python.org wants to download to an obscure directory such as
"C:\Users\Facdev\AppData\Local\Programs\Python\Python36-32".
Is there anything wrong with downloading instead to "C:"?
This should not pose a problem, as long as the directory is specified correctly in your PATH.
It is OK to modify the location where you will download and install Python. However, I would advise against doing so if you are unfamiliar with how system environment variables and PATH locations work in Windows.
Why does it matter?
Once you have the python executable (in your case Python-3.6.0.exe) on your system, your computer needs to know where it is in order to execute it! If you place the executable in a location like the main directory on the C: drive your computer does not care. Your computer also does not care if the executable is deep down in the AppData\ directory.
By changing the default behavior, you run the risk that, when you are troubleshooting unexpected behavior, the instructions you find will not be written for your situation. This is OK as long as you understand what you will need to change in order to apply the troubleshooting techniques listed in documentation, blog posts, and forums.
Because of those factors and this being a new process for you, I recommend sticking to the default. You can change the location later, once you understand what doing so means. Learning to program can be frustrating and trying to grasp managing the software environment only adds to the frustration. Tackle that issue later.
Good luck on your new adventure! I hope you learn to enjoy writing your own programs in python!
I need to obfuscate my source code as well as possible, so I decided to use uglifyjs2. My project structure has nested directories; how can I run the whole project through uglifyjs2 instead of giving it all the input files individually?
I wouldn't mind if it minified the whole project into a single file or something
I've done something very similar to this in a project I worked on. You have two options:
Leave the files in their directory structure.
This is by far the easier option, but provides a much lower level of obfuscation since someone interested enough in your code basically has a copy of the logical organization of files.
An attacker can simply pretty-print all the files and rename the obfuscated variable names in each file until they have an understanding of what is going on.
To do this, use fs.readdir and fs.stat to recursively go through folders, read in every .js file and output the mangled code.
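A rough sketch of that approach, using the synchronous fs calls and the uglify-js v2 API (treat the paths as examples, not a drop-in solution):

    // Sketch: recursively mangle every .js file under srcDir into outDir (uglify-js v2).
    var fs = require('fs');
    var path = require('path');
    var UglifyJS = require('uglify-js');

    function uglifyTree(srcDir, outDir) {
      if (!fs.existsSync(outDir)) fs.mkdirSync(outDir);
      fs.readdirSync(srcDir).forEach(function (name) {
        var srcPath = path.join(srcDir, name);
        var outPath = path.join(outDir, name);
        if (fs.statSync(srcPath).isDirectory()) {
          uglifyTree(srcPath, outPath);            // recurse into subfolders
        } else if (path.extname(name) === '.js') {
          var result = UglifyJS.minify(srcPath);   // reads, compresses and mangles the file
          fs.writeFileSync(outPath, result.code);
        }
      });
    }

    uglifyTree('./src', './obfuscated');           // example input/output directories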
Compile everything into a single JS file.
This is much more difficult for you to implement, but does make life harder on an attacker since they no longer have the benefit of your project's organization.
Your main problem is reconciling your require calls with files that no longer exist (since everything is now in the same file).
I did this by using Uglify to perform static analysis of my source code by analyzing the AST for calls to require. I then loaded the source code of the required file and repeated.
Once all code was loaded, I replaced the require calls with calls to a custom function, wrapped each file's source code in a function that emulates how node's module system works, and then mangled everything and compiled it into a single file.
My custom require function does most of what node's require does except that rather than searching the disk for a module, it searches the wrapper functions.
Unfortunately, I can't really share any code for #2 since it was part of a proprietary project, but the gist is:
Parse the source text into an AST using UglifyJS.parse.
Use the TreeWalker to visit every node of the AST and check if
node instanceof UglifyJS.AST_Call && node.start.value == 'require'
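In outline, that static-analysis step looks roughly like this (a simplified sketch using the uglify-js v2 API, not the original proprietary code):

    // Simplified sketch of the require-scanning step (uglify-js v2 API).
    var fs = require('fs');
    var UglifyJS = require('uglify-js');

    function findRequires(file) {
      var code = fs.readFileSync(file, 'utf8');
      var ast = UglifyJS.parse(code, { filename: file });
      var required = [];

      var walker = new UglifyJS.TreeWalker(function (node) {
        if (node instanceof UglifyJS.AST_Call &&
            node.start.value === 'require' &&
            node.args.length === 1 &&
            node.args[0] instanceof UglifyJS.AST_String) {
          required.push(node.args[0].value);   // e.g. './lib/foo'
        }
      });
      ast.walk(walker);
      return required;   // repeat for each required file until everything is loaded
    }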
As I have just completed a huge pure Node.js project spread over 80+ files, I had the same problem as the OP. I needed at least minimal protection for my hard work, but it seems this very basic need has not been covered by the npm open-source community. To add insult to injury, the JXCore package encryption system was cracked last week in a few hours, so it's back to obfuscation...
So I created a complete solution that handles file merging and uglifying. You also have the option of leaving specified files/folders out of the merge; those files are then copied to the output location of the merged file, and references to them are rewritten automatically.
NPMjs link of node-uglifier
Github repo of node-uglifier
PS: I would be glad if people would contribute to make it even better. This is a war between thieves and hard-working coders like yourself. Let's join forces and increase the pain of reverse engineering!
This isn't supported natively by uglifyjs2.
Consider using webpack to package up your entire app into a single minified .js file, excluding node_modules:
http://jlongster.com/Backend-Apps-with-Webpack--Part-I
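A minimal config along the lines of that article might look like the following (the entry point and output names are just examples, and this sketch assumes webpack 1.x, which that post uses):

    // webpack.config.js -- sketch for bundling a Node app into one minified file.
    var fs = require('fs');
    var path = require('path');
    var webpack = require('webpack');

    // Treat everything in node_modules as external so it isn't bundled or minified.
    var nodeModules = {};
    fs.readdirSync('node_modules')
      .filter(function (dir) { return dir !== '.bin'; })
      .forEach(function (mod) { nodeModules[mod] = 'commonjs ' + mod; });

    module.exports = {
      entry: './src/main.js',                  // example entry point
      target: 'node',
      output: {
        path: path.join(__dirname, 'build'),
        filename: 'bundle.min.js'
      },
      externals: nodeModules,
      plugins: [new webpack.optimize.UglifyJsPlugin()]   // mangle/minify the bundle
    };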
I had the same need - for which I created node-optimize and grunt-node-optimize.
https://www.npmjs.com/package/grunt-node-optimize
I am doing a timestamp-only build to bulk convert image files. Many of the converted image files already exist, but I like to make sure that they are all checked through each time.
How come SCons requires a database file (.sconsign.dblite) that it uses for MD5 hash data when it's instructed (via env.Decider("timestamp-newer")) to only deal with timestamps? It shouldn't need to keep a database between builds for timestamps because all the information is associated with the files themselves.
If the dblite database doesn't exist SCons reconverts all the images regardless of whether their timestamps imply they need to be rebuilt or not. The title is an example message I get when the dblite database does not exist.
If anyone can explain this I'd really appreciate it. I love the functional programming with Python, but SCons itself is not quite doing it for me at the moment.
Using "timestamp-newer", SCons actually stores the timestamp info. You can see why here:
Using Time Stamps to Decide If a File Has Changed
Try using "timestamp-match" instead.
I finally got this sorted. Brady was right about how to use SCons, but a few days ago I eventually worked out that you can also control exactly what gets built simply by controlling which build commands are issued in the first place. In my case I ignored any image files for which the target file already exists, using os.path.exists().
Sounds simple, but it is a conceptual difference between SCons and make, because make does not save its state between builds in the way SCons does.
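In SConstruct terms the idea was roughly this (the conversion command and directory names are placeholders for my real setup):

    # Sketch: only register conversion commands for targets that don't exist yet.
    import os

    env = Environment()
    env.Decider('timestamp-newer')

    for src in Glob('images/*.png'):                      # placeholder source pattern
        target = os.path.join('converted', src.name)
        if not os.path.exists(target):                    # skip files already converted
            env.Command(target, src, 'convert $SOURCE $TARGET')   # placeholder command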
Yes, I'm trying to work out the same thing, but I'm doing bulk conversion of video files which takes several days if done unnecessarily. I've already done most of it.
So I want a way to tell SCons, "For files that exist now, store their existing timestamps/MD5s, and don't rebuild unless that changes in future."
Will report back if I find a way...
I think your question is really about why there's a .sconsign.dblite when you set the decider to just check timestamp.
One reason is that it allows SCons to keep track of the method used to produce each target. If that changes, even if the timestamp doesn't, it should rebuild the affected targets.
Have you tried building a single file, and then using the sconsign utility to examine the contents of the .sconsign.dblite file?
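For example, from the top of the build tree:

    # dump what SCons recorded for each target (signatures, timestamps, build actions)
    sconsign .sconsign.dblite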
As the title says, I want to have a build tool that quite much stays out of my way.
I would rather specify rules than steps in the build process. I want to say that I want a binary with a given name placed in the root directory of my project, that .o files should go in an obj/tmp dir, and that the source is in the Source directory.
I do NOT want to tell it this'n'that file, as I keep adding new files rather quickly; it should just scan the source directory (and its subdirectories) looking for Ragel (.rl) and C++ (.cxx) code and do what's necessary to turn it all into an executable.
I have looked into many tools, like auto{make,conf,header} (I did not really like that I had to place the files it wanted in a subdirectory of the project root; Eclipse did not like that either) and CMake (it seems like I have to add all the source files myself, and it is pretty much a variation on autotools in my eyes). I have also read about ant and maven (I am also allergic to XML; it's a good format for serializing data for applications, not so much for humans, and I would prefer YAML) and others on Wikipedia. And I have seen tools that seem good but require being set up as a web server, which is kind of overkill.
Also, I really need to be able to work offline, without an internet connection!
Right now it seems like the best option is to write a little script that finds all the .cxx files, writes a Unity.cxx, and builds that one file with g++, which would probably be quite fast but is too much of an ugly hack, I guess.
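For concreteness, the kind of hack I mean is roughly this (paths, flags and the output name are made up):

    #!/usr/bin/env bash
    # Rough sketch of the unity-build hack; paths, flags and output name are made up.
    set -euo pipefail

    # Generate one translation unit that includes every source file...
    find Source -name '*.cxx' -printf '#include "%p"\n' > Unity.cxx

    # ...and compile just that one file (Ragel .rl files would still need a separate ragel step).
    g++ -O2 -o myprog Unity.cxx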
Bonus Points:
Fast builds
Ability to type build test-1 or something and it will build and directly run test-1
Multi-core builds (i.e. faster builds)
Does really not interrupt my train of thought
CMake is great. It's free, cross-platform, and reasonably well documented. It supports "out of source builds", meaning none of the build files are placed in the source directory. That makes source control a bit easier. It can be set up to find new files (globbing). Fast?...It generates make files...after that it's up to your compiler. Multicore...again, more a function of the compiler. I've used CMake on Windows, Linux, and Mac...it just works.
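A minimal CMakeLists.txt along those lines might look like this (the project name and directory layout are just examples):

    # Minimal sketch -- project name and directories are examples only.
    cmake_minimum_required(VERSION 2.8)
    project(myproject CXX)

    # Pick up every .cxx under Source/ automatically (re-run cmake after adding new files).
    file(GLOB_RECURSE SOURCES "${CMAKE_SOURCE_DIR}/Source/*.cxx")

    add_executable(myprog ${SOURCES})

    # Ragel .rl files would need an add_custom_command() step to generate .cxx first.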
Another that I haven't tried but have read about and plan to test is premake... http://industriousone.com/sample-script
cake from CoffeeScript is quite good, and I'm writing a similar tool using Lua myself.
CMake and premake aren't build/make tools, they are build/make-descriptor generators, which may fit a large number of projects that aren't changing too much, but not projects where rapid prototyping is key.
Right now, I'm doing a project where the browser updates when you hit the save button in your text editor; you do not need to go to the browser and hit F5 (which would cause a small delay while the browser loads everything in again, and you would most likely lose the state of the page: say you have a menu open and wish to tweak the look of that menu, you would be forced to navigate there again in your RIA).