Android Studio - Generating a signed bundle leaves out local json file

I have a local JSON file which is critical to the app. The file is around 28 MB in size. I have it stored under res/raw. However, when I generate a signed bundle from the release variant, the build is only 4 MB. If I generate a signed bundle from the debug variant, it's 41 MB and does include the JSON file.
I've tried using source sets, but it had no effect.
Any idea what I might be missing?

After much research, I found out that the maximum size of a file under res/raw is 1,023 KB. Anything larger than that leaves you with unexpected behaviour. I fixed it by breaking the given JSON file (~28 MB) into 30 small JSON files.
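For reference, a throwaway Node.js script along those lines; it assumes the top-level value of the JSON file is an array and uses illustrative file names, neither of which comes from the question:
// split-json.js - split one large JSON array into ~30 chunks for res/raw.
const fs = require('fs');

const items = JSON.parse(fs.readFileSync('data.json', 'utf8'));
const chunkCount = 30;                                   // ~28 MB / 30 is roughly 0.9 MB per chunk
const chunkSize = Math.ceil(items.length / chunkCount);

for (let i = 0; i < chunkCount; i++) {
  const chunk = items.slice(i * chunkSize, (i + 1) * chunkSize);
  // Android resource names may only contain lowercase letters, digits and underscores.
  fs.writeFileSync(`data_part_${i}.json`, JSON.stringify(chunk));
}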

Related

Heroku cannot store files temporarily

I am writing a Node.js app which works with fonts. One action it performs is downloading a .ttf font from the web, converting it to a base64 string, deleting the .ttf, and using that string for other things. I need the .ttf file stored somewhere so I can convert it. This process takes about 1-2 seconds. I know Heroku has an ephemeral file system, but I only need to store the file for a very short time. Is there any way I can store my files? Using fs.writeFile currently returns this error:
Error: EROFS: read-only file system, open '/app\test.txt'
One idea: make an action that fetches the font, converts it, and stores the result in a global variable before it is used by another task (see the sketch below). When you want to use it again, first check whether that global variable has already been filled with the font buffer.
Reference: Singleton
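A minimal sketch of that idea in Node.js; the URL parameter and module layout are made up for illustration, and it assumes a Node version with a global fetch (otherwise substitute your HTTP client of choice):
// fontCache.js - cache the converted font in module scope so it is downloaded
// and converted only once per dyno, and never written to disk.
let cachedFontBase64 = null;

async function getFontBase64(fontUrl) {
  if (cachedFontBase64) return cachedFontBase64;   // already filled? reuse it
  const res = await fetch(fontUrl);                // download the .ttf
  const buffer = Buffer.from(await res.arrayBuffer());
  cachedFontBase64 = buffer.toString('base64');    // convert instead of saving a file
  return cachedFontBase64;
}

module.exports = { getFontBase64 };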
I didn't know that you could store stuff in the /tmp directory. It is working for the moment, but according to the dyno/ephemeral-filesystem docs it gets cleaned frequently, so I don't know whether it may cause other problems in the long run.
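If you do want the file on disk briefly, /tmp is the writable spot; a short sketch, where fontBuffer and the file name are placeholders:
const fs = require('fs');
const os = require('os');
const path = require('path');

// Unlike the read-only app directory, the dyno's temp dir is writable.
const tmpFile = path.join(os.tmpdir(), 'font.ttf');
fs.writeFileSync(tmpFile, fontBuffer);   // fontBuffer: the downloaded .ttf bytes
// ...convert it here, then clean up so the short-lived file doesn't linger
fs.unlinkSync(tmpFile);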

How to debug a .zip generator algorithm?

I'm trying to implement a minimal version of .zip file generation following this spec: https://pkware.cachefly.net/webdocs/casestudies/APPNOTE.TXT
I don't actually need compression; I just need a way to string together a bunch of files into a single, widely adopted archive format, with the capability to stream in file data while streaming out the zip.
So far I'm partially successful: 7-Zip and the Windows built-in zip extractor can extract the archives just fine, while WinRAR and the macOS built-in zip extractor give me corrupted-archive errors.
I can't for the life of me find the actual problem(s?), though. As far as I can tell the .zips are built 100% to the specification, but the spec is a big wall of text, and with sweeping changes from one zip version to the next, along with legacy attributes taking on new functions, it is a tad confusing.
Does anyone know of an extraction tool that can give me more specific errors than just "archive is corrupt"?
Or perhaps a zip generation utility where I can pick and choose between all the different ways of building a zip file so I can go and compare the results byte by byte?
Does anyone know of an extraction tool that can give me more specific errors than just "archive is corrupt"?
The unzipada tool from the Zip-Ada project will do exactly that:
Testing archive ko.zip
raised ZIP.ARCHIVE_CORRUPTED : Bad (or no) end-of-central-directory
[C:\Ada\za\unzipada.exe]
Zip.Find_First_Offset at zip.adb:589
Unzip.Extract at unzip.adb:667
Unzipada at unzipada.adb:259
By browsing the code (e.g. zip.adb, line 589) you can narrow down the corrupt-archive issues. To build the tool, download the sources and follow the readme.txt file. There are also pre-built binaries for Windows.
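As a reference point, here is a minimal Node.js sketch (not tied to the asker's generator, whose language isn't stated) of the 22-byte end-of-central-directory record that the error above complains about, following the field layout in APPNOTE.TXT; comparing these bytes against the tail of a failing archive is a quick first check:
// eocd.js - build the end-of-central-directory record; all multi-byte fields
// are little-endian, and the single-disk fields are hard-coded to 0.
function endOfCentralDirectory(entryCount, centralDirSize, centralDirOffset) {
  const eocd = Buffer.alloc(22);
  eocd.writeUInt32LE(0x06054b50, 0);        // signature "PK\x05\x06"
  eocd.writeUInt16LE(0, 4);                 // number of this disk
  eocd.writeUInt16LE(0, 6);                 // disk where the central directory starts
  eocd.writeUInt16LE(entryCount, 8);        // central directory records on this disk
  eocd.writeUInt16LE(entryCount, 10);       // total central directory records
  eocd.writeUInt32LE(centralDirSize, 12);   // size of the central directory in bytes
  eocd.writeUInt32LE(centralDirOffset, 16); // offset of the central directory from the file start
  eocd.writeUInt16LE(0, 20);                // archive comment length
  return eocd;
}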

How does `aws s3 sync` determine if a file has been updated?

When I run the command in the terminal back to back, it doesn't sync the second time. Which is great! It shouldn't. But, if I run my build process and run aws s3 sync programmatically, back to back, it syncs all the files both times, as if my build process is changing something differently the second time.
Can't figure out what might be happening. Any ideas?
My build process is basically pug source/ --out static-site/ and stylus -c styles/ --out static-site/styles/
According to the docs (http://docs.aws.amazon.com/cli/latest/reference/s3/sync.html), s3 sync compares the size of the file and the last-modified timestamp to see whether a file needs to be synced.
In your case, I'd suspect the build system is producing a newer timestamp even though the file size hasn't changed.
AWS CLI sync:
A local file will require uploading if the size of the local file is different than the size of the s3 object, the last modified time of the local file is newer than the last modified time of the s3 object, or the local file does not exist under the specified bucket and prefix.
--size-only (boolean) Makes the size of each key the only criteria used to decide whether to sync from source to destination.
You want the --size-only option, which looks only at the file size, not the last-modified date. This is perfect for an asset build system that changes the last-modified date frequently but not the actual contents of the files (I ran into this with webpack builds, where things like fonts kept syncing even though the file contents were identical). If your build method doesn't incorporate a hash of the contents into the filename, it is possible to run into problems (if the build emits a file of the same size but with different contents), so watch out for that.
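For example, with the build output directory from the question (the bucket name is a placeholder):
aws s3 sync static-site/ s3://your-bucket --size-only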
I did manually test adding a new file that wasn't on the remote bucket and it is indeed added to the remote bucket with --size-only.
This is a bit dated, but I'll contribute nonetheless for folks arriving here via Google.
I agree with the accepted answer. To add context, aws s3 sync behaves differently from standard Linux sync tools in a number of ways. On Linux, an MD5 hash can be computed to determine whether a file has changed. S3 sync does not do this, so it can only decide based on size and/or timestamp. What's worse, AWS does not preserve timestamps when transferring in either direction, so the timestamp is ignored when syncing to local and only used when syncing to S3.

Stream Poco Zip Compression to Poco HTTPServerResponse

I would like to compress a directory directly into a Poco::HTTPServerResponse stream. However, downloading the zip file produced by the following code leads to a corrupt archive. I do know that the compression approach below works for locally created zip files, as I have successfully done that much. What am I missing, or is this simply not possible? (Poco v1.6.1)
std::string directory = "/tmp/data";
response.setStatusAndReason(HTTPResponse::HTTPStatus::HTTP_OK);
response.setKeepAlive(true);
response.setContentType("application/zip");
response.set("Content-Disposition","attachment; filename=\"data.zip\"");
Poco::Zip::Compress compress(response.send(), false);
compress.addRecursive(directory,
                      Poco::Zip::ZipCommon::CompressionMethod::CM_STORE,
                      Poco::Zip::ZipCommon::CompressionLevel::CL_MAXIMUM,
                      false, "data");
compress.close();
I use the same technique successfully, with only a slight difference: the compression method and the compression level (CM and CL).
compress.addFile( cacheFile, Poco::DateTime(), currentFile.GetName(), Poco::Zip::ZipCommon::CM_DEFLATE, Poco::Zip::ZipCommon::CL_SUPERFAST );
A zip file normally corresponds to the DEFLATE algorithm, so when unzipping, your explorer/archive manager probably can't cope with it.
Either that, or it's pointless to use a MAXIMUM level with a STORE method (STORE is non-compressing by definition).
EDIT: Just tried it; actually, it's because CM_STORE internally uses headers (probably some kind of tar-like layout). Once your files have been added to the zip stream and you close it, Poco tries to put the headers in order and resets the position of the output stream to the start to write them.
Since that cannot be done on the HTTP output stream (your bytes are already sent!), it fails.
Switching to CM_DEFLATE should fix your problem.

Deal with ZIP-Buffer in node.js

I am building the server part of a webapp, using node.js. This involves getting data from thetvdb.com (API documentation of thetvdb).
The data comes as a zip file. The HTTP download is no problem; parsing the file, however, is. I never actually save the file, but just keep it in memory, as suggested in How to download and unzip a zip file in memory in NodeJs?
I have a buffer with valid data (the same data as when I download the file with a browser/curl). However, adm-zip (I also tried other zip libraries; some report an invalid zip length) can't open it. It does not show an error, but the zipEntries at the end have a length of 0.
When I write the buffer out to the filesystem and open it with GUI or CLI tools, it works.
I can't give a direct link to the file, as it would involve my API key, but I re-uploaded it here.
I think I might have an answer for you:
Don't rely on npm install. I just ran the example that you linked to with the zip file you provided, and I get an output of "0".
I saw a comment on that other StackOverflow page saying that the version of adm-zip on npm is not up to date. I grabbed a fresh copy of adm-zip from GitHub, overwrote the one in my node_modules folder, reran the example code, and now get the following:
...
<Actor>
<id>237811</id>
<Image>actors/237811.jpg</Image>
<Name>Peter Pratt</Name>
<Role>The Master</Role>
<SortOrder>3</SortOrder>
</Actor>
...
Give that a shot!
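For completeness, a minimal sketch of reading the in-memory buffer with adm-zip (variable names are illustrative); with an up-to-date copy of the library this should list the archive's entries instead of returning zero of them:
const AdmZip = require('adm-zip');

// zipBuffer: the Buffer you already downloaded and kept in memory.
function listZipEntries(zipBuffer) {
  const zip = new AdmZip(zipBuffer);            // construct directly from the Buffer
  return zip.getEntries().map((entry) => ({
    name: entry.entryName,
    text: zip.readAsText(entry),                // e.g. the actors XML shown above
  }));
}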
