Copy projects between different computers - brightway

Assuming I have the same version of bw2 on two different machines, can I just copy and paste the contents of the projects.dir folder to transfer all the data of a project from one machine to the other without risk?

It will be easier and safer to use the utility functions backup_project_directory and restore_project_directory, though these functions are functionally the same as copy-pasting the directory (they just compress and archive it to a single file). This should be completely compatible between computers running the same OS; switching architectures, or to something really exotic, could require exporting the SQLite3 databases to CSV, but this is, so far, a theoretical problem.
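A minimal usage sketch (the project name and archive path below are placeholders; check the bw2io source or docs for the exact signatures in your version):

    # Hypothetical example -- names and paths are made up for illustration.
    import bw2io

    # On the old machine: compress the project directory into a single archive
    # (written to your home directory by default).
    bw2io.backup_project_directory("my_project")

    # On the new machine: restore the project from the archive created above.
    bw2io.restore_project_directory("/path/to/the/backup.tar.gz")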

Related

Exporting a brightway project

I am changing computers and would like to keep some projects with me.
I know where the project folders/files are located (C:\Users\AwesomeUser\AppData\Local\pylca\Brightway3), and hence where to copy them, but I am unsure how to add them to projects.db.
What is the best practice for moving projects across computers?
You can use the backup_project_directory and restore_project_directory functions in Brightway2-IO (see source code for usage).

What are the advantages of using the file system to organize our code?

It is 2017, and as far as I know, the way programmers organize their code has not changed. We distribute our code into files and organize them in a tree structure (nested directories and files). When the codebase is huge and the relations between classes/components are complex, this organization approach strikes me as inefficient: with more files, either a single directory holds more of them or the directory tree gets deeper. And since we handle the directories directly, navigation costs me time and effort without tools like search.
Figure: a complex UML diagram, from https://github.com/CMPUT301W15T09/Team9Project/wiki/UML
We can use CAD to design/draw complex things; mind maps can be created in a similar manner. For these, we do not need to deal with file systems. Can't we have something similar and hide the file system in a black box? Why have the fundamental organization methods not evolved in such a long time?
So I wonder: what advantages keep us from adopting a new way? What are the inherent advantages of using the file system to organize our code?
Different on-disk representations of source code have been tried (e.g. how Flash stores ActionScript inside binary .fla files) and they're generally unpopular. No one likes proprietary file formats. It also means you can't use text-based source control systems like Git, which means you can't do a text merge to resolve change conflicts.
We store source code in files in a tree structure (e.g. one OOP class or procedural module per file), with nested namespaces represented by nested directories because it's intuitive (and again, for better cohesion with source-control systems).
Some languages enforce this. Java, for example, requires a source file to be named after the class it contains and to live in a directory path matching its containing package. For other languages like C# and C++ it just makes sense, because otherwise it's confusing for someone new to your codebase to find class TurboEncabulator inside a file named PrefabulatedAmulite.cs.

Keep installer size down by reusing files / components

Let's say I have 2 features, both use abc.dll, and both reference it from their respective current directories.
So the output will look like this:
Feature1
    abc.dll
Feature2
    abc.dll
I've created 2 components for this. In reality I have many features and many DLLs that are shared, and my installer size is nearly 1 GB.
What I am looking for is a smarter way to do this, using IS 2015 Professional.
What I've looked at so far:
Merge modules: not sure if this would work; it also means I would need to maintain the merge modules manually should files be upgraded.
DuplicateFile, via the direct editor, but this wouldn't work because there is no way to bind it to a feature, only to a component.
A hidden feature which would install the shared files to the target system, followed by a post-install script which would copy these files to their respective feature folders and then delete this hidden feature's folder.
Is there a best practice method to implement what I need?
The most suitable approach in this case is, indeed, merge modules. I am not sure why you are concerned about maintaining them: you should have an automated build process that creates all merge modules and then builds your main installer with the newly created modules.
However, in my opinion, merge modules are a bit cumbersome to use if you have a lot of custom actions.
An alternative to merge modules, assuming you are using a Windows Installer project, is using small MSI packages which you "chain" to your main installer (you can chain multiple packages with different conditions and supply different properties). Here too, you should have a build process which builds all those small MSI packages and then builds the main installer.
If you don't want this kind of 'sub-project', then the option of a hidden feature with a post-install action is acceptable; I've seen it, and done it, a few times. Note that if you target Windows 7 or later, instead of physically copying the files and deleting them, you can use symbolic links (via the mklink command), which helps reduce the installation's footprint on the target system (and makes patching easier: you replace the original file, and everything that links to it picks up the change automatically).
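If that post-install step were scripted, the idea might look roughly like the following Python sketch (all paths and folder names are made up for illustration; a batch script calling mklink would do the same job, and creating symbolic links on Windows requires elevation or Developer Mode):

    # Hypothetical post-install step: replace per-feature copies of shared DLLs
    # with symbolic links pointing into a single hidden "Shared" folder.
    import os

    SHARED_DIR = r"C:\Program Files\MyApp\Shared"         # hidden feature's folder (made up)
    FEATURE_DIRS = [r"C:\Program Files\MyApp\Feature1",   # per-feature folders (made up)
                    r"C:\Program Files\MyApp\Feature2"]

    for feature_dir in FEATURE_DIRS:
        for name in os.listdir(SHARED_DIR):
            link_path = os.path.join(feature_dir, name)
            if not os.path.lexists(link_path):
                # Equivalent to: mklink "<link_path>" "<shared file>"
                os.symlink(os.path.join(SHARED_DIR, name), link_path)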

Is it possible for the same file to exist in more than one directory?

Just a simple question, born out of learning about file systems:
Is it possible for a single file to simultaneously exist in two or more directories?
I'd like to know if this is possible in Linux as well as Windows.
Yes, you can do this with either hard links or soft links (and maybe with shortcuts on Windows; I'm not sure about that). Note this is different from making a copy of the file! In both cases the file is stored only once, unlike when you make a copy.
In the case of hard links, the same file (on disk) will be referenced in two different places. You cannot distinguish between the 'original' and the 'new one'. If you delete one of them, the other will be unaffected; a file will only actually be deleted when the last "reference" is removed. An important detail is that the way hard links work means that you cannot create them for directories.
Soft links, also referred to as symbolic links, are a bit similar to shortcuts in Windows, but at a lower level. If you open one for reading or writing, you operate on the target file, but you can still distinguish between accessing the file directly and accessing it through the soft link.
In Windows, the use of soft links is fairly uncommon, but there is support for them (the built-in mklink command can create symbolic links as well as hard links, much like ln on Unix).
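For illustration, here is a small Python sketch of the difference (file names are hypothetical; os.link and os.symlink wrap the underlying system calls, and on Windows creating a symbolic link may require extra privileges):

    # Hypothetical demo: the same content reachable from two directory entries.
    import os

    with open("original.txt", "w") as f:
        f.write("hello\n")

    os.link("original.txt", "hardlink.txt")      # hard link: another name for the same data
    os.symlink("original.txt", "softlink.txt")   # soft link: a pointer to the path

    print(open("hardlink.txt").read())           # "hello"
    print(open("softlink.txt").read())           # "hello"
    print(os.path.islink("softlink.txt"))        # True: a soft link is distinguishable
    print(os.path.islink("hardlink.txt"))        # False: a hard link looks like a regular file

    os.remove("original.txt")
    print(open("hardlink.txt").read())           # still "hello": data survives until the last link is gone
    # Opening "softlink.txt" would now fail: it points to a path that no longer exists.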

Excel File opened under Ubuntu, read by R, OpenOffice

I have a bunch of Excel files which are updated on a daily basis on a Windows machine. I transfer them to an Ubuntu machine and want to open them there. Specifically, I want to read the files as a database under R.
A couple of years ago, I used ODBC on a Windows machine to open Excel files through R. Is there any way I can do this with R under Ubuntu?
I could make a .ODB database file for the corresponding XLS files using OpenOffice, but I don't know of a way to connect to a .ODB database. OpenOffice seems to have ways to connect to databases, but no way for other programs to connect to an .ODB file.
Thanks for any potential solutions.
You might be able to get away with using xls2csv from the catdoc package (apt-get install catdoc) to dump the Excel files to CSV. Then you can pretty much pick your poison as to how they get processed from there. read.csv.sql from the sqldf package could be very handy if you want to extract information using SQL statements.
I would suggest the xlsx package, which has no special requirements (unlike xlsReadWrite and others), so it can easily be used under Linux, although it only reads (and writes) the xlsx format.
Another approach could be the read.xls function in the gdata package, which first converts xls files to CSV and then reads those into data frames. You will need Perl and xls2csv installed, which is not a big problem under Linux.
Your ODBC solution should work on Linux, provided you install the unixODBC package for your OS (you might also need the unixODBC-devel package if compiling RODBC) and the RODBC package for R. The link Gabor provides in his comment on #daroczig's answer has some details of RODBC on Linux; note the points about this being read-only on Linux and the potentially difficult setup.
You might well be better off with the options #daroczig and Gabor suggest, but if you are familiar with ODBC you might want to give it a try on Ubuntu also.
There's another solution: host your data in a database to which both your machines have access. Postgres or MySQL will cost you nothing, or MS SQL Server if you've got cash rattling around. What you seem to be trying to do is exactly what networked RDBMSs were designed for. You'll be able to play with the data from Excel and R on remote machines. Win.
Copying Excel files around is a massive fail waiting to happen. Get yourself a real RDBMS. I'd go for Postgres.

Resources