Using curl to push a build to a Linux/Hadoop environment over SCP results in a stale build being delivered

This is an interesting issue, and I couldn't identify whether the cause is curl, SCP, or Linux/Hadoop.
The current environment uses the following command to push a build to the Linux/Hadoop environment:
curl -k -v -T my-build.app scp://this.is.a.fake.url.com/linux/mount/drive/to/hadoop
After providing the correct username and password, the build is pushed successfully.
However, when I check the content of the build, it is the file from a previous release (an old version that was uploaded before). It almost feels like there is a buffering mechanism, either in curl or in Linux/Hadoop, that keeps the old build (which must be stored somewhere).
I also made an interesting observation: if I delete the existing build in Hadoop/Linux before running the curl command, the issue never occurs. So the problem only appears when curl is used to upload over and replace an existing file; a fresh upload with no existing file always succeeds.
Just wondering if anyone has had a similar experience.

Well, HDFS files are immutable. You don't modify files in place; you either append or replace (create the new file, delete the old one, and rename to the same file name). This is consistent with what you are seeing, and there is likely a lost error message in your mount tooling. (Since in-place modification is not possible, the overwrite must be failing silently.)
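A minimal sketch of the workaround described above, using the placeholder host and path from the question; the SSH account name and the remote file name are assumptions. Deleting the old file first means the transfer is always a fresh create rather than an in-place overwrite:
# Remove the previous build from the mounted path first (account and remote file name are hypothetical)
ssh user@this.is.a.fake.url.com rm -f /linux/mount/drive/to/hadoop/my-build.app
# Then push the new build as a fresh file
curl -k -v -T my-build.app scp://this.is.a.fake.url.com/linux/mount/drive/to/hadoop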

Related

Yocto local repository

In our office, employees use Yocto for project development, and everyone downloads from the source repository.
I want to set up a repository-like server (just like apt-cacher), where all client machines connect to the local repository and download whatever is required. Is this possible?
Please correct me if I am asking the wrong question or have misunderstood something.
One idea is to create a local cloud drive and attach it to each computer as a folder.
In conf/local.conf, change:
SSTATE_DIR = "/path/to/your/sstate-repository"
DL_DIR ?= "/path/to/your/download/repository"
Please note that sstate will build up over time, so create a cron job that deletes old files there with this command:
find ${sstate_dir} -name 'sstate*' -atime +3 -delete
More information can be found on page 27 HERE
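For reference, a minimal sketch of scheduling that cleanup with cron; the path and the 02:00 schedule are just assumptions matching the command above:
# Hypothetical crontab entry: every night at 02:00, delete sstate objects not accessed for 3 days
0 2 * * * find /path/to/your/sstate-repository -name 'sstate*' -atime +3 -delete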
Assuming that the only thing you care about here is sources (and not built packages), you should probably take a look at the PREMIRRORS variable. One thing I'm not clear about in your question is "source repository" (accompanied by the git tag): a project is typically built from lots of components; about 95% of them come from external sources, while some come from internal repositories (Git/SVN/whatever). Internal repositories are usually not a big problem: you have them nearby, they tend to work, and everyone involved should have some access to them anyway. Most of the problems actually occur with external fetching, and that's where mirroring is handy.
The way I usually set things up with respect to source file management is as follows:
set up some internal FTP server, say "ftp://oe-src.example.com/" (with anonymous read-only access)
use DL_DIR ?= "${HOME}/sources" in your local.conf (which is the way it used to be way back in the OE Classic days); it's not strictly necessary, it's just that I like having the ability to clean up the build directory without wiping the downloaded source files at the same time
set PREMIRRORS variable in the local.conf like this:
PREMIRRORS = "\
git://.*/.* ftp://oe-src.example.com/ \n \
ftp://.*/.* ftp://oe-src.example.com/ \n \
http://.*/.* ftp://oe-src.example.com/ \n \
https://.*/.* ftp://oe-src.example.com/ \n \
"
add an action to your CI tool that synchronizes ~/sources with your FTP after the build (in "add to" mode, not removing old sources) via FTP/SCP/rsync/whatever; a sketch of this step follows below
This way you always have a nice set of source files on the FTP (including, by the way, tarballs of VCS checkouts of your internal software, so it also helps reduce the load on your internal VCS), and most source file requests are satisfied by this FTP, so you no longer have problems with missing sources, broken checksums, slow downloads, etc.
The only downside I see is that this FTP is open to everyone and holds all the sources, including your internal ones, which may or may not be a problem depending on your security policies. This can be mitigated by using per-project SFTP mirrors, which incurs the additional overhead of user key management, but I've also done such setups in practice, so when it's needed it can be done.
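A minimal sketch of that CI synchronization step, assuming rsync over SSH; the mirror account name and destination directory are placeholders:
# Push newly downloaded sources to the mirror in "add to" mode: nothing is deleted,
# and files already present on the mirror are skipped
rsync -av --ignore-existing "${HOME}/sources/" mirror@oe-src.example.com:/srv/ftp/oe-src/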

JSON must be no more than 1000000 bytes

We have a Jenkins/Chef setup with a QA build project for a client's website. The build gets the code from Bitbucket, and a script uploads the cookbooks from the Chef client to the Chef server.
These builds ran fine for a long time. Two days ago the automated and manual builds started failing with the following error (taken from the Jenkins console output):
Updated Environment qa
Uploading example-deployment [0.1.314]
ERROR: Request Entity Too Large
Response: JSON must be no more than 1000000 bytes.
From what I understand, the JSON files are supposed to be related to Node.js, which is what the developers use on this web server.
We looked all over the config files for Jenkins, the Chef server, and the QA server. We couldn't find a way to change the 1 MB limit that is causing this error.
We tried changing client_max_body_size; it didn't work.
We checked the size of the JSON files; none of them reaches this limit.
Any idea where we can find a solution? Can this limit be changed? Is there anything we can do infrastructure-wise, or should this be fixed on the developer side?
First of all, the 1 MB value is more or less hardcoded; the Chef server is not intended to store large objects.
Before a cookbook is uploaded, a JSON file describing it is generated. Since this file is stored in the database and indexed, it must not grow too large, to avoid performance problems.
The idea is to upload to the Chef server only what is absolutely necessary: strip CVS directories, any IDE build/project files, etc.
The simplest way to achieve this is the chefignore file. It has to be created just under the cookbook_path.
Its content is a set of wildcard patterns for files to ignore while uploading the cookbook, so an example could be:
*/.svn/* # To strip subversion directories
*/.git/* # To strip git directories
*~ # to ignore vim backup files
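As a quick way to see what is inflating the cookbook before uploading, something like the following can help; the cookbook path is an assumption, and the cookbook name is taken from the error output above:
# Spot bulky directories that probably belong in chefignore (VCS metadata, node_modules, build output)
du -sh /path/to/cookbooks/example-deployment/* | sort -rh | head
# Count the files that would end up listed in the cookbook's JSON manifest
find /path/to/cookbooks/example-deployment -type f | wc -l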

OwnCloud Remove all files prompt

I have an ownCloud server and the ownCloud desktop client. What I want to do is be able to delete things server-side and have them automatically deleted from the PC. The problem is that when files are deleted from the server, the ownCloud client displays a "Remove All Files?" warning with the choice to either remove all files or keep them. Is there a way to skip the prompt and have it automatically remove all files?
In version 2.2.3 (and maybe earlier), you can change the configuration file to disable the prompt.
See the code where the prompt is invoked and the code showing the configuration file property.
If you edit (on Windows) c:\Users\myuser\AppData\Owncloud\owncloud.cfg and add the following under the [General] section, you will no longer get the prompt:
promptDeleteAllFiles=false
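For context, a minimal sketch of what that part of owncloud.cfg looks like after the edit (any other existing keys under [General] stay untouched):
[General]
promptDeleteAllFiles=false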
The short answer: You cannot change this currently.
The long answer: The dialog was added as a safeguard because there were cases where you could lose all your files unintentionally, e.g. if your admin re-created your account and left it empty. The client would assume the files were gone (it could not know better) and would replicate the removal locally. The code is still there today just to be safe.
If you are fearless, you can patch Folder::slotAboutToRemoveAllFiles(). Alternatively, you could open a bug report so we can solve this for everyone. What is your motivation for wanting to do this without a prompt?
PS: The sources can be found on GitHub. URL and build instructions at http://doc.owncloud.org/desktop/1.5/building.html.
I have a script that processes the files someone drops into ownCloud and then moves them to their final storage location. However, this prompt stops the client from syncing until I manually log in to acknowledge it... I guess I will learn how to patch this. Dropbox doesn't do this, and Google Drive doesn't do this, but since I can't use cloud services (compliance issues), I have to use this solution until I can build a new secure upload mechanism.

Deploy Mercurial Changes to Website Hosting Account

I want to move only the website files changed since the published revision to a hosting account using SSH or FTP. The hosting account is Linux-based but does not have any version control installed, so I can't simply do an update there, and the solution must run on the local development machines.
I'm essentially trying to do what http://www.deployhq.com/ does, but for free. I want to publish changes without having to re-upload everything or manually choose the files to move. I'm open to simply using a bash script that compares versions and copies each file (how? I'm not that great with bash), since we'll be using Linux for development, but something with a web interface would be nice.
Thanks in advance for the help!
This seems more like a job for rsync than for hg, given that the target doesn't have hg installed.
Something like this:
rsync -avz /path/to/local/files/ remote_host:/remote/path/
This transfers all files recursively from /path/to/local/files/ and places them in /remote/path/ (recursion is implied by -a). The -a flag preserves file attributes, -z compresses data during transfer, and -v makes the output verbose.
rsync takes care of only transferring files that have changed. Be sure to watch for trailing slashes when specifying source paths; they matter (see the rsync man page).
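If you want the upload to reflect only the committed state of the repository, one possible variant (the paths are hypothetical) is to rsync from a clean hg export instead of the working directory:
# Export the current revision without the .hg metadata, then sync it to the host.
# --delete removes remote files that no longer exist locally; drop it if that is too aggressive.
rm -rf /tmp/site-export
hg archive /tmp/site-export
rsync -avz --delete --exclude '.hg_archival.txt' /tmp/site-export/ remote_host:/remote/path/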

SVN: Repository locked and svn cleanup command fails

I use PuTTY to connect to my Linux server and check out data from an SVN server; I set the checkout process running in the background. When I exited the PuTTY shell, the checkout was still running.
The next time I logged in and continued the checkout in the same directory, the following message was shown:
svn: Working copy 'scon_project' locked
svn: run 'svn cleanup' to remove locks (type 'svn help cleanup' for details)
But when I run svn cleanup, I still encounter a problem like this:
svn: In directory 'var/data'
svn: Error processing command 'modify-wcprop' in 'var/data'
svn: 'var/data/logo.jpg' is not under version control
But var/data/logo.jpg actually exists in the repository.
What's the matter, and how can I solve it? Thanks!
If you just exit PuTTY, your checkout will not continue in the background; it will most likely hang wherever it was at the time, with any files that were being worked on remaining locked. This might cause the unpredictable behavior you've been seeing with 'svn cleanup'.
You can get around this by using the GNU Screen utility, which allows your session to stay alive when you close Putty, and tends to be included in the package managers of most Linux distros.
You can do lots of things with Screen, and the man pages are vast, but for this purpose you should only need to do the following:
screen
You're now in a new terminal and can do whatever you need; you can close PuTTY if you like, and your programs will keep running.
After logging in again do:
screen -x
And you'll be reattached to your old session.
To kill a session, hit ctrl+d, as you would to end any terminal session.
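A minimal sketch of that workflow applied to the checkout from the question (the repository URL is hypothetical):
screen                                                        # start a new screen session
svn checkout https://svn.example.com/repo/trunk scon_project  # run the long checkout inside it
# Close PuTTY (or detach with Ctrl-a d); the checkout keeps running on the server.
# After logging back in:
screen -x                                                     # reattach to the running session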
Why do you use a remote connection (via PuTTY) to the server holding your repository and run the checkout there? Can't you use an SVN client (like TortoiseSVN for Windows) on your LOCAL computer and perform all repository operations from there? It makes many common problems much easier to solve.
A lock on an SVN working copy is a common problem and usually comes out of some earlier error. There are many ways to unlock a locked working copy; try Googling around to find them.
Depending on how deeply your working copy is locked, most of them may fail. In that situation, I always use a so-called "brute-force" approach, i.e.:
Check out the current version of the repository into a new folder.
Export the contents of this folder into another one (all the files without the SVN metadata).
Delete all the files in the first folder (the one where you checked out the repository).
Copy (or move) all the files from the second folder (where you exported) into the first one.
Commit the changes.
This is a common solution for many working-copy problems, including a locked working copy.
Again, I strongly advise you to use a local SVN client on your local computer. Do not do anything remotely on the server where you actually host your SVN repository unless you really have to and are really sure there is no other way.
