Synchronizing files and symlinks between two Linux OSes - linux

I am facing the following bug:
https://bugzilla.samba.org/show_bug.cgi?id=4531
rsync will always let the older symlink on the other side overwrite the newer one on the local side.
Wayne has suggested using unison, but it is an old project that is no longer being developed, which makes me wary of using it.
What can you suggest instead?
My main aim is to synchronize files, directories, and links between 2 nodes.

unison is OK, as long as your file and folder names don't use Unicode, especially cross-platform. It can't hurt to give it a try.
See here for the limitations on Unicode in filenames.

Related

Find path to Application Support directory in python in MacOS

I've been working on a Mac app for a long time now, under the assumption that everyone has their Application Support folder located at /Users/[user_name]/Library/Application Support. However, I recently learned this is not true. I need something I can do in Python to get the path to the Application Support directory on anyone's computer. Everything I've seen so far has been very outdated with respect to Python. The only leads I have are some PyPI packages, PyCocoa and pyobjc-framework-Cocoa. I intend for this to work across several recent OS versions (one person has Monterey, one has Catalina, one has Big Sur, I have Mojave), and it seems that the path to the Application Support folder has changed across versions. What is the accepted way to find this folder on any computer?
Thanks!
Might be a bit slow, but the os library has a walk function which may help. You could loop through the most likely places, e.g. $HOME, then read through the subfolders. Something like this should identify the directories (obviously, you will need to check them against your expectations):
[x[0] for x in os.walk(top_directory)]
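For what it's worth, the per-user Application Support folder conventionally sits under the home directory on macOS, so a minimal sketch avoids walking the disk entirely (the helper name here is mine, and it assumes the standard ~/Library layout):

```python
import os

# Minimal sketch, assuming the standard per-user layout on macOS:
# expanduser("~") resolves the home directory for whoever is logged in,
# so the user name is never hard-coded. (Function name is hypothetical.)
def app_support_dir():
    return os.path.join(os.path.expanduser("~"), "Library", "Application Support")

print(app_support_dir())
```

This sidesteps the version question for the per-user folder, since the path is built relative to whatever home directory the OS reports.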

Best practice for paths defined in cross platform config files

I'm building a node application that has config files that are to be edited by users of the application, and they have file paths in them.
These config files will be used in Windows, Linux and MacOSX. For example, a project using this application might be developed in both Windows and MacOSX, but the config files are the same.
What is the best practice for the format of the paths in this case?
I have a couple of possibilities:
Force POSIX paths in the config files, and convert them to the current platform when handling them. The problem here is that Windows users might not like having paths in a different format from the one they are used to.
Allow both formats in the config files and convert them to the current platform when parsing. I'm afraid this might lead to parsing issues. Also, it might lead to weird mixed config files.
I think there's a lot of software out there that had the same dilemma, so I'm wondering if there's some best practice out there.
Thank you for your help!
Edit: paths are only going to be relative, no absolute paths. If someone puts an absolute path, then that config can't be used in different OSs anyway, so it's ok.
Converting between / and \ on the fly should not be a big issue, dealing with the root of the path is where things get problematic.
I would suggest that you add support for some kind of string expansion so that users can specify paths relative to known locations. Something like this perhaps:
imagedir=%config%/myimages
fallbackimagedir=%appdir%/images
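That expansion idea can be sketched in a few lines (Python here purely for illustration; the function name and the roots table are made up, and the placeholder names just mirror the %config%/%appdir% examples above):

```python
# Illustrative sketch of %name% placeholder expansion against a table of
# known root locations. Names and values here are hypothetical examples.
def expand_path(path, roots):
    for name, value in roots.items():
        path = path.replace("%" + name + "%", value)
    return path

roots = {"config": "/etc/myapp", "appdir": "/opt/myapp"}
print(expand_path("%config%/myimages", roots))   # /etc/myapp/myimages
```

The roots table would be populated per-platform at startup, so the config file itself stays platform-neutral.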
You should also try to deal with full paths.
I would suggest that you convert Windows drive letter paths like c:\foo to /c/foo when reading the config on POSIX systems. This is easy since the string length is the same but it does assume that /c has been set up to be something useful.
Reading a POSIX path on Windows is trickier: when given a path that begins with /, you have to prefix it with something. What that something is, is up to you; perhaps the drive letter root of the drive where your application is installed.
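Both conversions described above can be sketched roughly like this (Python for illustration; the /c mount convention and the C: fallback prefix are the assumptions the answer already names):

```python
import re

# Sketch: convert a Windows drive-letter path like "c:\foo" to the POSIX
# form "/c/foo" when reading the config on a POSIX system. Assumes a /c
# mount point has been set up to be something useful, as noted above.
def win_to_posix(path):
    m = re.match(r"^([A-Za-z]):[\\/](.*)$", path)
    if m:
        drive, rest = m.groups()
        return "/" + drive.lower() + "/" + rest.replace("\\", "/")
    return path.replace("\\", "/")

# Sketch: prefix an absolute POSIX path with a drive root on Windows.
# The choice of "C:" is arbitrary - it stands in for whatever prefix
# the application settles on (e.g. its own install drive).
def posix_to_win(path, drive_root="C:"):
    if path.startswith("/"):
        return drive_root + path.replace("/", "\\")
    return path.replace("/", "\\")
```

Relative paths pass through both functions with only the separators swapped, which matches the edit in the question restricting configs to relative paths.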

Will writing C in both Windows and Linux cause compiling problems?

I work from 2 different machines. One is Windows and the other is Linux. If I alternately work on the same project but switch between both OSes, will I eventually run into compiling errors? I ask because maybe there are standards supported by one but not by the other.
That question is a pretty broad one and it depends, strictly speaking, on your toolchain. If you were to use the same toolchain (e.g. GCC/MinGW or Clang), you'd minimize the chance of this class of errors. If you were to use Visual Studio on Windows and GCC or Clang on the Linux side, you'd run into more issues, if only because some of the headers differ. So once your program leaves the realm of strict ANSI C (C89), you'll be on your own.
However, if you aren't careful you may run into a lot of other, more mundane errors, such as the compiler on Linux choking on Windows line endings because you didn't tell your editor on the Windows side to use Unix ones.
Ah, and also keep in mind that if you want to actually cross-compile, GCC may be the best choice and therefore the first part I mentioned in my answer becomes a moot point. GCC is a proven choice on both ends. And given your question it's unlikely that you are trying to write something like a kernel mode driver - which would be fundamentally different.
That may happen only if your application uses some platform-specific API.
It is entirely possible to write code that compiles and works on both platforms with no issues. It is, however, not without some difficulties. Compilers allow you to use non-standard features, and it's often hard to do fancier user interfaces (even if they're still just text), because as soon as you want to do more than "read a line of text as it is entered in a shell", you're in "non-standard" land.
If you do find yourself needing to do more than what the standard C library can do, make sure you isolate those parts of the code into a separate file (or a couple of files, one for Linux/Unix style systems and one for Windows systems).
Using the same compiler (gcc) would help you avoid problems of the form "compiler B doesn't compile code that works fine in compiler A".
But it's far from an absolute necessity - just make sure you compile the code on both platforms and with all of your "supported" compilers often enough that you haven't dug a very deep hole that is hard to get out of before you discover that "it's not working on the other system". It certainly helps to have (at least) a virtual machine running the other OS, so you can easily try both variants.
Ideally, you want to set up an automated system, such that when you change the code [and feel that the changes are "complete"], it automatically gets built on both platforms and all compilers you want to use. And if possible, also automatically tested!
I would also seriously consider using version control - that way, when something breaks on one side or the other, you can go back and look at what the code looked like before it stopped working, and (hopefully) find the reason it broke much more quickly than "Hmm, I think it's the change I made to foo.c, let's take that out... No, not that one, OK, how about the change here...". With version control, you can instead say "OK, so version 1234 doesn't work, let's try version 1220 - OK, that works. Now try 1228, still works - so the change is between 1229 and 1234 - try 1232, ah, it's broken..." No editing files, and you can still go to any other version you like with very little difficulty. I have used Mercurial quite a bit, git a little, some Subversion, and worked on a project in Perforce for a few years. All of these are good - personally, I think I prefer Mercurial.
As a side effect: most version control systems also deal with filenames and line endings in a saner way than handling them manually.
If you combine your version control system with an automated build-and-test system such as Jenkins, you can automate everything. Jenkins is free and runs on both Windows and Linux, and you can use it to automatically build and test your code as and when you submit it to the version control system.
It will not create a problem as long as you recompile the source code on the respective OS. If you want to run a compiled file generated on Windows (.exe or .obj) on Linux, or vice versa, that will definitely be a problem and won't work. But you can move your source code (files with extension .c/.cpp) to either OS. It can also sometimes create problems with different header files, so take care of that as well. Best practice is to use a single OS for your entire project, and avoid multiple OSes unless it is absolutely necessary.

File systems with support to directory hard-linking

Does anybody know of one, preferably with a Linux implementation?
Alternatively, does anybody know how much effort it would take to add it to any open-source implementation? (I mean: maybe it's enough to change an if statement, or maybe I'd have to go carefully through the whole filesystem implementation adding tests; do you have a sense of that?)
Thanks...
HFS+ allows directory hardlinks in OS X 10.5. Since OS X 10.6, only Time Machine can create them, and HFS+ does some sanity checking that they do not introduce cycles.
However, Linux will not read them. Beyond individual filesystems, this could also be enforced at the VFS layer. Even if there are no cycles, some userspace tools rely on there being no directory hard links (e.g., a GNU find optimisation that lets it skip many directories; it can be disabled with -noleaf).
Technically nothing keeps you from opening /dev/sda with a hex editor and creating one. However everything else in your system will fall apart if you do.
The best explanation I could find is this quote from jta:
User-added hardlinks to directories are forbidden because they break the directed acyclic graph structure of the filesystem (which is an ASSERT in Unixiana, roughly), and because they confuse the hell out of file-tree-walkers (a term Multicians will recognize at sight, but Unix geeks can probably figure out without problems too).

Multiple Haskell cabal-packages in one directory

What is the recommended way of having several cabal packages in one directory?
Why: I have an old project with many separable modules. Since originally they formed just one program it was, and still is, handy to have them in same directory for easy compiling.
Options
Just suffer and split everything, including VCS holding the stuff, into different directories?
Hack cabal until it is happy with multiple .cabal files in the same directory?
Make another subdirectory for each module and put the .cabal files there, along with symlinks to the original pieces of code?
Something smarter? What?
I'd have to recommend option 1 or 3 for cleanliness. I'm not sure how to get around this, or if there is even a way.
I'd say a modified option of 1: subdirectories for everything, no symlinks, but keep everything under a single VCS.
This problem is on the issue list for Cabal 2.
This is exactly what workspaces in Leksah were designed to do. Just get your hands on Leksah and the rest will sort itself out.
