JSON must be no more than 1000000 bytes - node.js

We have a Jenkins-Chef setup with a QA build project for a client's website. The build gets the code from Bitbucket, and a script uploads the cookbooks from the Chef Client to the Chef Server.
These builds ran fine for a long time. Two days ago the automated and manual builds started failing with the following error (taken from the Jenkins console output):
Updated Environment qa
Uploading example-deployment [0.1.314]
ERROR: Request Entity Too Large
Response: JSON must be no more than 1000000 bytes.
From what I understand, JSON files are related to Node.js, which is what the developers use on this web server.
We looked all over the config files for Jenkins, the Chef Server and the QA server. We couldn't find a way to change the 1 MB limit that is causing this error.
We tried changing client_max_body_size, but it didn't work.
We checked the sizes of the JSON files; none of them comes anywhere near this limit.
Any idea where we can find a solution? Can this limit be changed? Is there anything we can do (infrastructure-wise), or should this be fixed on the developer side?

First of all, the 1 MB value is more or less hardcoded; the Chef Server is not intended to store large objects.
Before a cookbook is uploaded, a JSON file describing it is created. As this file will be stored in the database and indexed, it should not grow too large, to avoid performance problems.
The idea is to upload to the Chef Server only what is absolutely necessary: strip version-control directories, any IDE build/project files, etc.
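To see what is actually inflating a cookbook before uploading it, something like this can help (the path and cookbook name are illustrative, matching the build output above):
# list the largest entries under the cookbook directory
du -a cookbooks/example-deployment | sort -n | tail -20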
The simplest way to achieve this is the chefignore file. It has to be created just under the cookbook_path.
Its content is a set of wildcard patterns to ignore while uploading the cookbook, so an example could be:
*/.svn/* # To strip subversion directories
*/.git/* # To strip git directories
*~ # to ignore vim backup files
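As a rough sketch of how this fits together, assuming a standard knife setup and the cookbook name from the build output:
# cookbook_path/chefignore holds the patterns above, one per line;
# matching files are skipped on upload, so they no longer bloat the cookbook's JSON metadata
knife cookbook upload example-deployment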

Related

Using Curl to push build to Linux/Hadoop environment with SCP protocol results in stale build delivered

This is an interesting issue; I couldn't identify whether the cause is Curl, SCP, or Linux/Hadoop.
The current environment uses the following command to push a build to the Linux/Hadoop environment.
curl -k -v scp://this.is.a.fake.url.com/linux/mount/drive/to/hadoop my-build.app
After providing the correct username and password, the build is pushed successfully.
However, when I check the content of the build, it is a file from the previous release (an old version that was uploaded before). It almost feels like there is a buffering mechanism, either in Curl or in Linux/Hadoop, that keeps the old build (which must be stored somewhere).
I also noticed that if I delete the existing build in Hadoop/Linux before the curl command, the issue never occurs. So the problem only appears when Curl uploads over an existing file; a fresh upload with no existing file always succeeds.
Just wondering if anyone has had a similar experience.
Well, HDFS files are write-once: you don't modify a file in place, you either append to it or replace it (create a new file, delete the old one, rename the new one to the same file name). This is consistent with what you are seeing, and there is likely a lost error message in your mount tooling. (Since modifying in place is not possible, the overwrite must be failing silently.)
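As a rough illustration of that replace pattern from a shell, assuming direct hdfs dfs access and an illustrative target path (your setup goes through a mount, so the exact commands will differ):
# upload under a temporary name, then swap it in for the old file
hdfs dfs -put my-build.app /apps/builds/my-build.app.new
hdfs dfs -rm /apps/builds/my-build.app
hdfs dfs -mv /apps/builds/my-build.app.new /apps/builds/my-build.app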

Using haxe to edit remote file?

I've searched haxelib for a library for remotely editing a file on a server over an SSH connection with Haxe, or for listing the files in a directory.
Has anyone done this with Haxe?
I want to build a desktop app to create a YAML editor that will change the settings files of several servers, using a frontend like haxe-ui.
Ok, there are probably a lot of ways you could do it, but I would suggest separating your concerns:
desktop app to create a YAML editor
Ok, that's a fine use case for Haxe / a programming language. Build an editor, check.
change settings files (located on) several servers
Ok, so you have options here. Either
Make the remote files appear as local files via some network file system, or
Copy the files locally, edit them, and copy them back, or
Roll your own network-enabled service that runs on each server, receives commands, and modifies the files.
Random aside: Given that these are settings files, you probably also want to restart some service after changes are made.
I'd say option 2 is the easiest. There are even many ways to do that:
Use scp to bring the settings files to a local location, edit them locally, and then push them back (see the sketch after this list). And if you set up SSH keys, you won't have to bother with passwords.
Netcat is another tool for pushing bytes (aka files) over the network. It's simpler than scp, but with no security measures.
Or, get creative / crazy, and say, "my settings files will all be stored in a git repo. The 'sync' process will be a push / pull setup."
There are simply lots of ways to get this done.
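A minimal sketch of option 2 with scp, assuming SSH key access and illustrative host, path, and service names:
# pull the settings file down, edit it locally, then push it back
scp admin@server1.example.com:/etc/myapp/settings.yaml ./settings.yaml
# ... edit ./settings.yaml in the editor ...
scp ./settings.yaml admin@server1.example.com:/etc/myapp/settings.yaml
# then restart whatever service reads the file
ssh admin@server1.example.com 'sudo systemctl restart myapp'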

Linux: Traverse Server Directory and build List of Checksums for all Files

I am running a web server with several CMS sites.
To become aware of hacks on my web server, I am looking for a mechanism that lets me detect changed files on it.
I am thinking of a tool or script which traverses the directory structure, builds a checksum for each file, and writes out a list of files with file size, last-modified date, and checksum.
At the next execution, I would then be able to compare this list with the previous one and detect new or modified files.
Does anyone know a script or tool which can accomplish this?
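For reference, a minimal sketch of such a script with GNU find and coreutils (the web root path is illustrative):
# record size, modification time and SHA-256 checksum for every file under the web root
find /var/www -type f -printf '%s %T@ ' -exec sha256sum {} \; > checksums.new
# compare against the previous run to spot new or modified files
diff checksums.old checksums.new
mv checksums.new checksums.old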

How to throttle bandwidth for OverGrive in Linux (Debian)?

I've installed trickle but can't seem to get it to throttle overGrive.
$ trickle -d 500 -u 100 overgrive
trickle: Could not reach trickled, working independently: No such file or directory
trickle: exec(): No such file or directory
Is there another way to get overGrive to stop sucking up all my bandwidth while syncing?
I managed to solve the issue I had, and overGrive has worked fine for the last couple of weeks. It turned out that it had synchronized some public files created by other users which had nothing to do with my Google Drive account. What these files had in common is that they belonged to some courses and had names like "MATH&141, Mod 01: Quiz 2 Toolkit". For some reason these files didn't have a .doc extension and had the symbols & and : in their names, which seems to make overGrive get stuck on them forever.
Anyway, I performed the following steps and it fixed the issue:
Download and install the latest version of overGrive.
Clear all trash files from Google Drive online.
Delete all files from your local Google Drive folder, if present, along with these state files, and restart overGrive:
.overgrive.lastsync
.overgrive.cache
Turn off automatic sync and start the synchronization manually.
Wait until a full synchronization has finished.
You can check the log file in your home folder, called .overgrive.log, to see if there were any errors during the synchronization. It can happen that overGrive gets stuck on a specific file and tries to synchronize it over and over again, causing heavy download/upload usage.
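For the delete-and-restart step and the log check, a small sketch from a shell, assuming the state files sit in the home folder next to the log:
# remove overGrive's local state files before restarting it
rm -f ~/.overgrive.lastsync ~/.overgrive.cache
# watch the log for files it keeps retrying
tail -f ~/.overgrive.log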

Transfer zip file to web server

I would like to develop an app that targets everything from Gingerbread (version 2.3, API 9) to Jelly Bean (version 4.3, API 18).
The problem:
I need to transfer large images (40 to 50 at a time), either independently or in a zip file, without the user having to click on each file being transferred. As far as I can tell, I need to use HttpClient (org.apache), which was deprecated after Jelly Bean.
Right now the application takes the images and zips them into a zip file prior to uploading. I can create additional zip files; for example, if I have 50 MB to transfer, I can make each zip file about 10 MB and have 5 files to transfer if I have to. I need to transfer these files to a web server. I can't seem to find anything about transferring files after Jelly Bean; all the searching I've done turns up the deprecated commands, and the posts are 2-5 years old. I have installed andftp and transferred a 16 MB zip file last night that was created by my app, but I really don't want to use that, as it will require additional steps from the user. I will try andftp today and set up an intent to transfer the files to see how that works out. Supposedly andftp works up until Lollipop (5.0). If there is an easier way, please let me know; hopefully I've missed something about transferring files. Is there another way to do this after Jelly Bean?
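As a side note, the transfer itself can be sanity-checked from a shell with curl, assuming a hypothetical upload endpoint and form field name, to confirm that the web server accepts the zip independently of whichever Android HTTP API ends up being used:
# multipart POST of one of the generated zip files to the web server
curl -v -F "archive=@images-part1.zip" https://www.example.com/upload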
