Local update using OSTree - Linux

I am considering OSTree for updating my embedded devices. I understand that it's used primarily as an OTA solution, but I would like to update devices locally as well, by just moving files from my computer to the embedded device for a quick fix or debugging.
After reading the docs and a few articles about it, this seems impossible: the system must be read-only and any change would break the deployment, and if I put the files into /etc or /usr then OSTree can't track them. I know I can update the device locally by having an OSTree repository on my machine, but that would mean I also have to keep track of the remote repository on my machine so I don't overwrite the latest changes. Do I understand OSTree correctly? Is there any other way than keeping an OSTree repository on my machine?
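To be concrete, the local-repository workflow I mean would look roughly like this (the repo path, remote URL, and branch name are made up for illustration):

    # on my computer: mirror the remote so I don't clobber newer commits
    ostree --repo=local-repo init --mode=archive
    ostree --repo=local-repo remote add origin https://example.com/repo
    ostree --repo=local-repo pull origin my/branch
    # commit my quick fix on top; the device would then pull from this repo
    # and deploy it like a normal upgrade
    ostree --repo=local-repo commit --branch=my/branch --tree=dir=my-rootfs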
I know I can get this information by just playing with OSTree, but I don't want to waste three days only to find out it's impossible and I have to use something else.

Related

Separate environments for learning or trying out vs production (sandboxes?)

Can you suggest a way of separating learning/trying-out environments from production on the same computer? I am at a point where I know a lot of JS and have production-ready skills, while I still sometimes need to probe or try out simpler stuff or basics. I presume that a lot of engineers are in a similar place.
This is the situation I am facing right now:
I wanted to install Redis and configure it while trying out something interesting.
In a separate project I needed another clean Redis configuration and installation.
On the front-end side I tried out and installed a few npm packages globally.
At some point I installed Python 3.4; now I require 3.6.
At some point I installed nginx and configured it; now I need another configuration and to wipe the previous one out.
If I start a big project right now, I feel like my computer will eventually let me down because of the several attempts I previously made.
Et cetera. All of these create friction in both my learning and my exploration.
Now, it crosses my mind to use separate VirtualBox installations for trying things out, but this answer is trivial; please suggest something else.
P.S.: I am using Linux Mint.
You can install and use Docker, which is also trivial; however, if your environment is Linux you can use LXC.
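For instance, with the classic LXC tools a disposable sandbox looks roughly like this (the distro/release/arch arguments are just examples):

    # create a sandbox container from the download template
    lxc-create -t download -n sandbox -- -d ubuntu -r jammy -a amd64
    lxc-start -n sandbox       # boot it
    lxc-attach -n sandbox      # get a shell inside
    lxc-destroy -n sandbox -f  # throw it away when done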
There isn't really a single good answer to this sort of question, of course, but some things that are generally a good idea are:
use git repos to keep the source "backed up" (obviously your local PC should not be the git server); commit your changes all the time. If you can't hold your breath for as long as the timespan between two commits, then you're doing it wrong (or you may have asthma; see a doctor).
Always build your project with not just multiple, but a variable number of "deployments" in mind. That means not hardcoding absolute paths, database names/ports/hostnames, and things like that. If your project needs database/API credentials, then those should live in a config file of sorts (or in the env); that config file should be stored outside the codebase and shouldn't be checked into your git repos (though there can of course be a config template in there).
Always have at least 2 deployments of any project actually deployed. Next to the (obvious) "live"/"production" deployment, which your clients/users use, you want a "dev" version for yourself where you can freely shit the bed, and for bigger projects you may well want multiple. Each deployment would have its own database and its own copy of the code/assets.
It can be useful to deploy everything inside Podman or Docker containers, which makes it easier to have a near-identical system in both development and production (in case those are different servers), but that may be too much overhead for you.
Have a method (maybe a script) that makes it very easy to deploy updates from your git repo or dev deployment to the production deployment; a minimal sketch follows this list. Based on your description, I'm guessing that if a client tells you she wants some minor cosmetic changes done, you do them straight on the live version; very convenient and fast, but a horrible thing in practice. Once you switch from that workflow to having a separate dev deployment, you'll feel slowed down by it (which you are), but if you optimize that workflow over time you'll get to the point where you can still deploy cosmetic changes in a minute or so while having fully separated deployments. It is worth the time investment.
Have a personal devtools git repo or something similar. You're likely using an IDE such as VS Code? Back up your VS Code user config in that repo and update it reasonably frequently. A text editor, Photoshop or another image editor, etc.: same deal. You hear that ticking sound? That's the bomb that's been placed on your motherboard. It might go off tonight, it might not go off for years, but you never know; always expect it could be today or tomorrow, so have your stuff backed up externally and/or on offline media.
There's a lot more but those are some of the basics that spring to mind.
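For the deploy-script point above, a minimal sketch; the hostname, path, and post-deploy step are made-up examples to adapt to your stack:

    #!/bin/sh
    # deploy.sh: run from your local clone to ship main to production
    set -e
    ssh me@prod-server 'cd /var/www/myproject && git pull origin main'
    # any post-deploy steps the project needs, e.g. installing dependencies
    ssh me@prod-server 'cd /var/www/myproject && npm ci'
    echo "deployed $(git rev-parse --short HEAD)"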
I thought Docker was only for containerizing your app with all the installation files and configurations before pushing it to production.
Docker is useful whenever you need to configure the runtime environment in an isolated manner. Production, local development, and other environments all need the same runtime, and all benefit from the runtime definition and isolation that Docker provides. Arguably Docker is even more useful in workstation-centric development than it is in production.
I wanted to install Redis and configure it while trying out something interesting.
Instead of installing Redis on your OS directly, run the preexisting Docker image for Redis.
In a separate project I needed another clean Redis configuration and installation.
Instantiate the Docker image again, and now you have 2 isolated Redis servers running locally.
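For example (container names and host ports are arbitrary):

    # two fully isolated Redis servers from the official image
    docker run -d --name redis-project-a -p 6379:6379 redis
    docker run -d --name redis-project-b -p 6380:6379 redis
    # throw one away when the experiment is over
    docker rm -f redis-project-a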
On the front-end side I tried out and installed a few npm packages globally.
Run your npm code within a Node.js Docker container.
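Something like this keeps every package inside the container; the image tag and paths are examples:

    # run npm inside the official node image; nothing lands globally on the host
    docker run --rm -it -v "$PWD":/app -w /app node:20 npm install
    docker run --rm -it -v "$PWD":/app -w /app node:20 npm test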
At some point I installed python 3.4 now require 3.6
Different versions of Python are a great use case for Docker containers, whose images are tagged with specific Python versions.
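For instance (image tags and paths are examples):

    # pick the interpreter per project via the image tag
    docker run --rm -it python:3.4 python --version
    docker run --rm -it python:3.6 python --version
    # run a script against a specific version
    docker run --rm -v "$PWD":/src -w /src python:3.6 python main.py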
At some point I installed nginx and configured it; now I need another configuration and to wipe the previous one out.
Nginx also has a very useful official image.
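Each configuration can live in its own container; the names, ports, and config paths below are examples:

    # each project gets its own nginx with its own config, no global state
    docker run -d --name site-a -p 8080:80 \
      -v "$PWD/site-a.conf":/etc/nginx/conf.d/default.conf:ro nginx
    # wipe it out without touching anything else
    docker rm -f site-a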
If I start a big project right now, I feel like my computer will eventually let me down because of the several attempts I previously made.
Yeah, it gets messy quickly. That's why Docker is such a great solution. Give every project dedicated services and use docker-compose to simplify the networking and the building of components. Fight the temptation to use one Docker container for more than one service; instead, stitch them together with Docker networks.
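docker-compose automates exactly this wiring; with the raw CLI the idea looks like this (the network, container, and file names are examples):

    # one private network per project; one container per service
    docker network create myproject-net
    docker run -d --name myproject-redis --network myproject-net redis
    # the app reaches redis by container name over the shared network
    docker run --rm -it --network myproject-net \
      -v "$PWD":/app -w /app node:20 node server.js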
Read https://docs.docker.com/get-started/overview/ to get started with Docker.

git: can I issue commands from two computers mounted to the same file system?

I hope I can explain this in a simple way ...
The files I am adding to git are on a Linux server. I access these files from various computers, depending on where I am. Sometimes it is from a Windows machine, with a drive mapped to a network drive. Sometimes I ssh into the server.
I created my git repository while working on the Windows machine with a network drive mapped to the appropriate file system; let's call it W:. I was in W:\ when I created the repository.
When I ssh into the server, the directory would be something like: /home/mydir/WORKING_DIR/
Can I now, while in my ssh session, issue git commands to update the repository on the Linux machine?
This is not an answer, but it is too long for the comments.
I'm getting to the end of my tether with git. It has now completely messed everything up. Trying to google for a solution is really fruitless: nothing is specific enough, and when you do try something that might be relevant, it just totally screws things up further.
I tried changing the path in the config file manually. But I really didn't know what to change it to. If it should be relative, then relative to what?
I tried a couple of things and ended up with /home/myname/myworkingdir/
However, it has now deleted my files again and set me back to some unknown state. Fortunately I backed my files up beforehand, so I tried to copy them back into place and add them again. I get "fatal: 'myfilename and path in here' is beyond a symbolic link." I have no idea what that is supposed to mean.
git status just shows more things to be deleted.
There are probably situations where this works without any issue (e.g. git status) and others where git assumes exclusive access (e.g. attempting to commit simultaneously from two computers that both have access to the same working directory).
Wanting to ask this seems like a symptom of misunderstanding the Git model, anyway. You'll be much better off with a separate working directory on each computer (or even multiple check-outs on the same computer). Git was designed for distributed, detached operation - go with that, and you'll be fine.
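Concretely, the usual arrangement is a bare "hub" repository on the server with a separate clone per computer (the paths and branch name here are examples):

    # one-time, on the server: a bare repository to sync through
    git init --bare /home/mydir/project.git

    # on each computer: its own clone
    git clone ssh://me@server/home/mydir/project.git
    cd project
    # ...edit, commit locally...
    git commit -am "my change"
    git push origin main   # publish
    git pull               # pick up work done on the other machine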

maintain multiple versions of Windows CE (QFEs)

We build firmware using Windows CE (6 and 7) on a Windows XP system. We often install the QFEs (CE patches/updates) from Microsoft as they are released. When we have to go back to a certain release to develop a patch, it can be a real pain because we will need to build a system with the same patch level that existed on the system at the time that the product was released. Is there any easy way to maintain a QFE history that can easily be reverted at any given time? Something along the lines of snapshotting the system state as it pertains to the CE install/QFEs at each release? We don't want to use virtual machine snapshots or anything that controls the state of anything outside of the Windows CE components for this. It is a pretty specific requirement, so I am guessing no, but perhaps someone has tackled this exact problem.
I understand that you're saying you don't want to use VMs, though I'm not entirely sure why. I'd recommend at least thinking about it.
Back when I controlled builds for multiple platforms across multiple OS versions, I used virtual machines for this. Each VM was a bare snapshot of a PC with the tools and SDKs installed. A build script would then pull the source for each BSP and build it nightly. The key is to maintain and archive "clean" VMs (without source) and just pitch the changes after doing builds. It was way faster and way cleaner than trying to keep the WINCEROOT for each QFE level in source control and pulling that; you have to reset the machine to zero in that case anyway to be confident there's no cross-contamination between levels.

Git slow when cloning to Samba shares

We are deploying a new development platform.
We have a really complicated environment that we cannot reproduce on developers' computers, so people cannot clone the Git repository onto their own machines.
Instead, they clone the repository into a mapped network drive (a Samba share) that is the DocumentRoot of a per-developer website on our servers.
Each developer has his own share+DocumentRoot/website, so they cannot impact other people this way.
Developers run Linux or Windows as their operating system.
We are using a 1 Gbit/s connection, and Git is really slow compared to local use.
Our repository size is ~900 MB.
git status on the Samba share takes about 3 minutes to complete; that's unusable.
We tried some Samba tuning, but it's still really slow.
Does someone have an idea?
Thank you for your time.
Emmanuel.
I believe git status works by simply looking for changes in your repository. It does this by examining all of the files and checking for ones that have changed. When you execute this against a Samba share, or any other network share, it has to do that inspection over the network connection.
I don't have any intimate knowledge of the git implementation, but my guess is that it essentially boils down to:
Examine all files in directory
Repeat for every directory
So instead of creating a single persistent connection to the share, it's creating one for every single file in the repository, and with a ~900 MB repository that's going to be slow even over a fast connection.
Have you considered the following workflow instead?
Have every developer clone to their local machine
Do work on the local machine
Push changes to their share when they need to deploy / test / debug
This would avoid the use of git on the actual share and eliminate this problem.
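Concretely, that could look like this (paths and the branch name are examples; the share is assumed to hold a checked-out repository acting as the DocumentRoot):

    # one-time, on the share's repository: let pushes update the working tree
    git -C /mnt/dev-share/project config receive.denyCurrentBranch updateInstead

    # on the local machine: clone from the share to the local disk
    git clone /mnt/dev-share/project ~/work/project
    cd ~/work/project
    # ...edit and commit locally; git status now only touches the local disk...
    git commit -am "fix bug"
    git push origin main   # deploy to the share only when needed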

Edit files on server with Eclipse

I'm trying to figure out how to do this with Eclipse. We currently run SVN and everything works great, but I'd really like to cut my SSH requests in half and use Eclipse to modify some files directly on the server. I'm using the Eclipse build below... how can I do this?
Eclipse for PHP Developers
Build id: 20100218-1602
Update
I have no intention of eliminating SVN from the equation, but when we need to make a hotfix, or run a specific report or function as a one-time thing, I'd much rather use Eclipse than the terminal for that kind of modification.
Have a look at How can I use a remote workspace over SSH? on the Eclipse wiki. I'm just quoting the summary below (read the whole section):
Summing up, I would recommend the following (in order of preference):
VNC or NX, when it is available remotely, Eclipse can be started remotely, and the network is fast enough (try it out).
Mounted filesystem (Samba or SSHFS), when possible, the network is fast enough, and the workspace is not too huge.
rsync, when offline editing is desired, sufficient tooling is available locally, and no merge issues are expected (single-user scenario).
RSE, on very slow connections or huge workspaces where minimal data transfer is desired.
EFS, on fast connections when all tooling supports it and options like VNC, a mounted filesystem, or rsync are not available.
But whatever you experiment with, don't bypass the version control system.
You could use something like SSHFS, but really, it's a better idea to use some kind of source control system instead of editing files directly on the server. If Subversion isn't sufficient, perhaps you might try a DVCS like Git or Mercurial.
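If you do try the SSHFS route, the mount itself is a one-liner (the host and paths are examples):

    # mount the remote code locally over SSH (needs the sshfs package)
    sshfs user@server:/var/www/site ~/mnt/site
    # ...point Eclipse at ~/mnt/site...
    fusermount -u ~/mnt/site   # unmount when done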
