WifiLock under-locked my_lock - skmaps

I'm trying to download an offline map pack. I'm reverse engineering the example project from the Skobbler support website, but when I try to start a download the download manager crashes.
My use case: show a list of available countries (within the EUR continent), let the user select a single one, and download that country at that moment. So far I have a list presenting those options. Upon selecting an item (and starting the download), it crashes.
For the sake of the question I have commented out some things.
Relevant code:
// Get the information about where to obtain the files from
SKPackageURLInfo urlInfo = SKPackageManager.getInstance().getURLInfoForPackageWithCode(pack.packageCode);
// Steps: SKM, ZIP, TXG
List<SKToolsFileDownloadStep> downloadSteps = new ArrayList<>();
downloadSteps.add(new SKToolsFileDownloadStep(urlInfo.getMapURL(), pack.file, pack.skmsize)); // SKM
//downloadSteps.add(); // TODO ZIP
//downloadSteps.add(); // TODO TXG
List<SKToolsDownloadItem> downloadItems = new ArrayList<>(1);
downloadItems.add(new SKToolsDownloadItem(pack.packageCode, downloadSteps, SKToolsDownloadItem.QUEUED, true, true));
mDownloadManager.startDownload(downloadItems); // This is where the crash is
I am noticing a running download, since onDownloadProgress() (a callback from the manager) is getting triggered. However, the SKToolsDownloadItem it takes as a parameter says that the stepIndex starts at 0. I don't know how this can be, since I manually set it to (byte) 0, just like the example does.
Also, the logs show a warning from SingleClientConnManager:
Invalid use of SingleClientConnManager: connection still allocated.
That code is called from somewhere inside the manager. I suspect some vital setup step is missing from the documentation and the example project.


Read in file with client-side clojurescript/re-frame app

I'm writing a client-side application which should read in a file, transform its content and then export the result. To do this, I decided on Re-Frame.
Now, I'm just starting to wrap my head around Re-Frame and ClojureScript itself, and I got the following to work:
Somewhere in my view functions, I dispatch this whenever a new file gets selected via a simple HTML input:
[:input {:class "file-input" :type "file"
         :on-change #(re-frame/dispatch
                       [::events/file-name-change (-> % .-target .-value)])}]
What I get is something like C:\fakepath\file-name.txt, with fakepath actually being part of it.
My event handler currently only splits that string and saves the file name; my view subscribes to it to display the selected file.
(re-frame/reg-event-db
  ::file-name-change
  (fn [db [_ new-name]]
    (assoc db :file-name (last (split new-name #"\\")))))
Additionally, I want to read in the file so I can process it locally later. Assuming I'd just change my :on-change action and the event handler to do this instead, how would I do it?
I've searched for a while but found next to nothing. The only things that came up were other frameworks and such, but I don't want to introduce a new dependency for each and every new problem.
I'm assuming you want to do everything in the client using HTML5 APIs (e.g. no actual upload to a server).
This guide from MDN may come in handy: https://developer.mozilla.org/en-US/docs/Web/API/File/Using_files_from_web_applications
It seems you can subscribe to the event triggered when the user selects the file(s), then obtain a list of said files, and inspect the files' contents through the File API: https://developer.mozilla.org/en-US/docs/Web/API/File
In your case, you'll need to save a reference to the FileList object from the event somewhere, and re-use it later.
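As a minimal plain-JavaScript sketch of that flow (the .file-input selector and the handleText callback are placeholders for illustration; in your case a re-frame dispatch would go where handleText is called):
// Read the first selected file as text using the HTML5 File API.
var input = document.querySelector('.file-input');
input.addEventListener('change', function (event) {
    var file = event.target.files[0]; // a File taken from the FileList
    if (!file) return;
    var reader = new FileReader();
    reader.onload = function (e) {
        handleText(e.target.result); // e.target.result holds the file contents as a string
    };
    reader.readAsText(file);
});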

How to download a directory's content via ftp using nodejs?

So, I am trying to download the contents of a directory via SFTP using Node.js, and so far I am getting stuck on an error.
I am using the ssh2-sftp-client npm package and for the most part it works pretty well, as I am able to connect to the server and list the files in a particular remote directory.
Using the fastGet method to download a single file also works without any hassle, and since all the methods are promise-based I assumed I could easily download all the files in the directory by doing something like:
let main = async () => {
    await sftp.connect(config.sftp);
    let data = await sftp.list(config.remote_dir);
    if (data.length) data.map(async x => {
        await sftp.fastGet(`${config.remote_dir}/${x.name}`, config.base_path + x.name);
    });
}
So it turns out the code above successfully downloads the first file, but then crashes with the following error message:
Error: Failed to get sandbox/demo2.txt: The requested operation cannot be performed because there is a file transfer in progress.
This seems to indicate that the promise from fastGet is resolving too early, as the file transfer is supposed to be over before the next element of the file list is processed.
I tried using the more traditional get() instead, but it uses streams and fails with a different error. After researching, it seems there's been a breaking change regarding streams in Node 10.x. In my case, calling get simply fails (it doesn't even download the first file).
Does anyone know a workaround for this? Or, failing that, another package that can download several files over SFTP?
Thanks!
I figured it out: since the issue was concurrent download attempts on one client connection, I could work around it by using one client per file download. I ended up with the following recursive function.
let getFromFtp = async (arr) => {
    if (arr.length == 0) return (processFiles());
    let x = arr.shift();
    conns.push(new Client());
    let idx = conns.length - 1;
    await conns[idx].connect(config.sftp.auth);
    await conns[idx]
        .fastGet(`${config.sftp.remote_dir}/${x.name}`, `${config.dl_dir}${x.name}`);
    await conns[idx].end();
    getFromFtp(arr);
};
Notes about this function:
The array parameter is the list of files to download, fetched beforehand using list().
conns was declared as an empty array and is used to hold our clients.
Array.prototype.shift() is used to gradually deplete the array as we go through the file list.
The processFiles() method is fired once all the files have been downloaded.
This is just the POC version; of course, error handling still needs to be added.
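For what it's worth, if the root cause is several transfers running at once over a single connection, another option (an untested sketch reusing the same ssh2-sftp-client calls and config object from the question) is to keep one client and await each download before starting the next, instead of firing them all from map():
let downloadAll = async () => {
    await sftp.connect(config.sftp);
    let data = await sftp.list(config.remote_dir);
    // A plain for...of loop actually waits between iterations,
    // unlike data.map(async ...), which starts every transfer at once.
    for (const x of data) {
        await sftp.fastGet(`${config.remote_dir}/${x.name}`, config.base_path + x.name);
    }
    await sftp.end();
};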

nodejs vs. ruby / understanding requests processing order

I have a simple utility that I use to resize images on the fly via URL params.
Having some trouble with the Ruby image libraries (CMYK to RGB conversion is, how to say… "unavailable"), I gave it a shot with Node.js, which solved that issue.
Basically, if the image does not exist yet, Node or Ruby transforms it. Otherwise, when the image has already been requested/transformed, the Ruby or Node processes aren't touched and the image is returned statically.
The Ruby version works perfectly: a bit slow if a lot of transforms are requested at once, but very stable; it always gets through whatever the amount (I see the images arriving on the page one after another).
With Node it also works perfectly, but when a large number of images are requested for a single page load, the first image is transformed and then all the other requests return the very same image (the last one transformed). If I refresh the page, the first image (already transformed) is returned right away, the second one is returned correctly transformed, but then all the other images returned are the same as the one just transformed, and it goes on like that for every refresh. Not optimal; basically the requests get "merged" at some point and all return the same image, for a reason I don't understand.
(By "large amount" I mean more than one.)
The Ruby version:
get "/:commands/*" do |commands,remote_path|
path = "./public/#{commands}/#{remote_path}"
root_domain = request.host.split(/\./).last(2).join(".")
url = "https://storage.googleapis.com/thebucket/store/#{remote_path}"
img = Dragonfly.app.fetch_url(url)
resized_img = img.thumb(commands).to_response(env)
return resized_img
end
The Node.js version:
app.get('/:transform/:id', function(req, res, next){
    parser.parse(req.params, function(resized_img){
        // the transforms are done via lovell/sharp
        // parser.parse parses the params, writes the file,
        // and returns the file path
        // then:
        fs.readFileSync(resized_img, function(error, data){
            res.write(data)
            res.end()
        })
    })
})
Feels like I'm missing a crucial point about Node here. I expected the same behaviour from Node and Ruby, but obviously the same pattern transposed to the Node side just does not work as expected. Node is not waiting for one request to finish before processing the next; rather, it processes them in an order that is not clear to me.
I also understand that I may not be using the right words to describe the issue; I'm hoping it might speak to some experienced users who can provide clarification and a better understanding of what happens behind the scenes in Node.
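Not an answer, just an illustration of the kind of thing that produces this symptom. Since parser.parse isn't shown this is only a guess: if the parser keeps the output path (or the pipeline) in a variable shared by all requests, overlapping requests overwrite it and every callback ends up reading the last file written. makeOutputPath and transform below are made-up placeholder names:
// Hypothetical sketch, not the real parser code.
// Problematic: one module-level variable shared by every in-flight request.
let lastPath;
app.get('/:transform/:id', function (req, res) {
    lastPath = makeOutputPath(req.params);            // request B overwrites request A's path
    transform(req.params, lastPath, function () {
        fs.readFile(lastPath, function (err, data) {  // may now point at B's image
            res.end(data);
        });
    });
});

// Safe: keep the path local to the request, so overlapping requests can't clash.
app.get('/:transform/:id', function (req, res) {
    const outPath = makeOutputPath(req.params);
    transform(req.params, outPath, function () {
        fs.readFile(outPath, function (err, data) {
            res.end(data);
        });
    });
});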

My program turns a spreadsheet into an Excel file, but it only works for one user

I've made a fairly large (1000+ lines of code) Apps Script. It works great for me, but no other user can run it. I want them to be able to run it as well, and I cannot figure out why they can't.
The problem occurs on this part:
var id = 'A_correct_ID_of_a_Google_Spreadsheet';
var SSurl = 'https://docs.google.com/feeds/';
var doc = UrlFetchApp.fetch(SSurl+'download/spreadsheets/Export?key='+id+'&exportFormat=xls',googleOAuth_('docs',SSurl)).getBlob();
var spreadsheet = DocsList.createFile(doc);
The function (and structure) was published here: other thread
function googleOAuth_(name, scope) {
  var oAuthConfig = UrlFetchApp.addOAuthService(name);
  oAuthConfig.setRequestTokenUrl("https://www.google.com/accounts/OAuthGetRequestToken?scope="+scope);
  oAuthConfig.setAuthorizationUrl("https://www.google.com/accounts/OAuthAuthorizeToken");
  oAuthConfig.setAccessTokenUrl("https://www.google.com/accounts/OAuthGetAccessToken");
  oAuthConfig.setConsumerKey('anonymous');
  oAuthConfig.setConsumerSecret('anonymous');
  return {oAuthServiceName:name, oAuthUseToken:"always"};
}
I can't see any reason why the program would only run for one user. All the files are shared between all the users, and ownership has been swapped around.
When a script uses OAuth (googleOAuth_(name, scope) in your case), it needs to be authorized from the script editor, independently of the other authorization that the user grants through the usual procedure.
This has been the object of an enhancement request for quite a long time and has no valid workaround as far as I know.
So, depending on how your script is deployed (in a spreadsheet, a doc, or as a web app), you might find a solution or not... If, as suggested in the first comment on your post, you run this from a web app, you can deploy it to run as yourself and allow anonymous access, and it will work easily. In every other case, your other users will have to open the script editor and run a function that triggers the OAuth authorization process.
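For example, a minimal function of that kind (a sketch only; the fetched URL is just any request that uses the googleOAuth_ options from above) that each user runs once from the script editor to trigger the authorization prompt could look like:
// Run this once from the script editor; the fetch forces the OAuth authorization dialog.
function authorizeOnce() {
  var scope = 'https://docs.google.com/feeds/';
  UrlFetchApp.fetch(scope, googleOAuth_('docs', scope));
}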

Developer documentation for previous releases of Chromium/Google Chrome [duplicate]

I love what Chrome offers me through its extension API.
But I always find myself lost in the jungle of which API is supported by which Chrome version, and when the last chrome.experimental feature made it into the supported extension APIs.
The Chrome extensions page gives me a nice overview of what is supported, but without mentioning since which version. The same is true for the experimental API overview. Is a specific API still experimental, or is it already supported in Canary, for example?
If I try a sample from the Chrome samples website, I usually have to change some API calls from chrome.experimental.foo to chrome.foo, or find out that the API is not supported at all. (What happened to chrome.experimental.alarm?) That usually means taking a trial-and-error approach to eliminate all errors until the sample works.
tldr;
So, I'm wondering: is there an overview page which tells me since which version each API has been supported, or when it was decided to drop an experimental API? And if there is no such page, what is the recommended way, or your personal approach, to deal with this situation?
On this page, the process of generating the official documentation (automatically from the Chrome repository) is described. On the same page, you can also read how to retrieve the documentation for older branches. Note that the documentation is somewhat incomplete: deprecated APIs are removed from it immediately, although they still exist (such as onRequest).
What's New in Extensions is a brief list of API changes and updates (excluding most of the experimental APIs). It has to be manually edited, so it's not always up-to-date. For example, the current stable version is 20, but the page's last entry is 19.
If you really need a single page containing all API changes, the following approach can be used:
First, install all Chrome versions. This is not time-consuming when done automatically: I've written a script which automates the installation of Chrome and duplicates a previous profile; see this answer.
Testing for the existence of the feature:
Write a manifest file which includes all permissions (unrecognised permissions are always ignored); a cut-down example follows this list.
Chrome 18+: Duplicate the extension with manifest version 1 and 2. Some APIs are disabled in manifest version 1 (example).
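A cut-down example of such a manifest (only a sample of permission names is shown; the real file would list every permission you want to probe, and probe.js is the hypothetical background script that runs the getInfo function shown further down):
{
  "name": "API probe",
  "version": "1.0",
  "manifest_version": 2,
  "permissions": [
    "tabs", "bookmarks", "history", "storage",
    "alarms", "webRequest", "experimental"
  ],
  "background": { "scripts": ["probe.js"] }
}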
Testing whether a feature is implemented and behaving as expected is very time-consuming. For this reason, you'd better test for the existence of an API.
A reasonable way to do this is to recursively loop through the properties of chrome and log the results (displayed to the user / posted to a server).
The process of testing. Use one of the following methods:
Use a single Chrome profile, and start testing at the lowest version.
Use a separate profile for each Chrome version, so that you can test multiple Chrome versions side-by-side.
Post-processing: Interpret the results (a small diff sketch follows the example code below).
Example code to get information:
/**
 * Returns a JSON-serializable object which shows all defined methods
 * @param root    Root, e.g. chrome
 * @param results Object; the result will look like {tabs:{get:'function'}}
 */
function getInfo(root, results) {
    if (root == null) return results;
    var keys = Object.keys(root), i, key;
    results = results || {};
    for (i = 0; i < keys.length; i++) {
        key = keys[i];
        switch (typeof root[key]) {
        case "function":
            results[key] = 'function';
            break;
        case "object":
            if (root[key] instanceof Array) break; // Exclude arrays
            var subtree = results[key] = {};
            getInfo(root[key], subtree); // Recursion
            break;
        default:
            /* Do you really want to know about primitives?
             * ( Such as chrome.windows.WINDOW_ID_NONE ) */
        }
    }
    return results;
}
/* Example: Get data, so that it can be saved for later use */
var dataToPostForLaterComparision = JSON.stringify(getInfo(chrome, {}));
// ...
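For the post-processing step, a rough sketch along the same lines (savedOld and savedNew stand for the JSON strings collected from two different Chrome versions):
/* Sketch: list API paths present in one result but missing from the other. */
function diffInfo(oldInfo, newInfo, prefix, out) {
    prefix = prefix || 'chrome';
    out = out || { added: [], removed: [] };
    var seen = {};
    Object.keys(oldInfo).concat(Object.keys(newInfo)).forEach(function (key) {
        if (seen[key]) return;
        seen[key] = true;
        var path = prefix + '.' + key;
        if (!(key in oldInfo)) { out.added.push(path); return; }
        if (!(key in newInfo)) { out.removed.push(path); return; }
        if (typeof oldInfo[key] === 'object' && typeof newInfo[key] === 'object') {
            diffInfo(oldInfo[key], newInfo[key], path, out); // Recurse into sub-namespaces
        }
    });
    return out;
}
// Example: console.log(diffInfo(JSON.parse(savedOld), JSON.parse(savedNew)));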
