Sketch Plugin - NSFileManager contentsOfDirectoryAtPath - not a function

I'm trying to make a plugin for Sketch that allows me to automate the production of multilingual assets.
Right now, I select a list of language files that kick off the process; I use NSOpenPanel to select these. For each language file, the plugin makes a new page for that language, finds text layers with a special tag, and replaces the copy with the translated copy.
While it's doing this, I'd like it to check whether there are any screenshots in another folder (screenshots/lang) and store a list of the file paths for those images. This is where I'm getting stuck at the moment.
Currently I have:
var screenShotPaths = [[NSFileManager defaultManager] contentsOfDirectoryAtPath:mySourcePath error:NULL];
When I try to run my plugin in Sketch in its current state, the console says:
(Sketch Plugin)[6547]: TypeError: fileManager.contentsOfDirectoryAtPath_ is not a function.
(In 'fileManager.contentsOfDirectoryAtPath_( mySourcePath)', 'fileManager.contentsOfDirectoryAtPath_' is undefined)
I've looked all over; I see a lot of issues where the list isn't coming back, is coming back null, and so on, but I can't find anything where contentsOfDirectoryAtPath comes back as not a function.
Is this because it's a plugin getting run by an application and navigating the filesystem is a security issue?
Any help would be much appreciated. Thank you!

A little late to the game here, but hopefully someone searching for this will find it useful.
I found that you have to use the full method: contentsOfDirectoryAtURL_includingPropertiesForKeys_options_error(url, nil, nil, nil);
I'm more familiar with JS, so I'm using JS syntax. Hopefully someone else can translate this to Cocoa.
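Putting that into a runnable CocoaScript snippet (a minimal sketch; mySourcePath is assumed to hold the path of the screenshots/lang folder from the question):
var mySourceURL = NSURL.fileURLWithPath(mySourcePath);
var fileManager = NSFileManager.defaultManager();
// returns an NSArray of NSURLs, one per item in the folder
var screenShotURLs = fileManager.contentsOfDirectoryAtURL_includingPropertiesForKeys_options_error(mySourceURL, nil, nil, nil);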

Works for me (Sketch 55):
var fileManager = [NSFileManager defaultManager];
var paths = [fileManager contentsOfDirectoryAtPath:filePath error:null];

Related

What parameters does a function take

I am trying to create an edgeCollection via the node command line. I think db.edgeCollection does this for me. What I don't know is what extra parameters the function takes in order to create a new edge collection.
I am currently using arangojs version 2.15.9
var database = require("arangojs").Database;
var db = new database("http://user:pass@127.0.0.1:8529");
db.edgeCollection(##What should I write here to create a new edge collection?##)
It would be nice if there were a global way of knowing the parameters required by any function.
I am using vim as my code editor.
To create an edgeCollection, all I needed to do was this:
var collection = db.edgeCollection("new-edge");
collection.create();
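Expanded into a fuller sketch (assuming the promise-returning API of arangojs 2.x; the collection name is just an example):
var Database = require("arangojs").Database;
var db = new Database("http://user:pass@127.0.0.1:8529");

// edgeCollection() only returns a handle; create() actually creates the collection on the server
var collection = db.edgeCollection("new-edge");
collection.create().then(function () {
    console.log("edge collection created");
}, function (err) {
    console.error("creation failed:", err);
});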
This solves the first part. And I am really sorry for not searching harder, because there is already a thread that answers the second part of the question:
show function parameters in vim
If I understand your question correctly, you need to go with the arangojs documentation.
Try this: https://www.npmjs.com/package/arangojs
If you are using the vim editor, you lose many of the suggestion opportunities provided by IDEs like Eclipse and IDEA, or even Notepad++.

Saving the stream using Intel RealSense

I'm new to Intel RealSense. I want to learn how to save the color and depth streams to bitmaps. I'm using C++ as my language. I have learned that there is a ToBitmap() function, but it is only available for C#.
So I wanted to know whether there is any method or function that will help me save the streams.
Thanks in advance.
I'm also working my way through this, and it seems that the only option is to do it manually. We need to get ImageData from PXCImage. The actual data is stored in ImageData.planes, but I still don't understand how it's organized.
https://software.intel.com/en-us/articles/dipping-into-the-intel-realsense-raw-data-stream?language=en Here you can find an example of getting depth data.
But I still have no idea what the pitches are or how the data inside planes is organized.
Here: https://software.intel.com/en-us/forums/intel-perceptual-computing-sdk/topic/332718 a kind of reverse process is described.
I would be glad if you are able to get some insight from this information.
And I would obviously be glad if you discover some insight you can share :).
UPD: Here is something that looks like what we need. I haven't worked with it yet, but it sheds some light on the internal organization of planes[0]: https://software.intel.com/en-us/forums/intel-perceptual-computing-sdk/topic/514663
UPD2: To add some completeness to the answer:
You can then create a GDI+ image from the data in ImageData:
auto colorData = PXCImage::ImageData();
if (image->AcquireAccess(PXCImage::ACCESS_READ, PXCImage::PIXEL_FORMAT_RGB24, &colorData) >= PXC_STATUS_NO_ERROR) {
    auto colorInfo = image->QueryInfo();
    // pitches[0] is the stride of the first plane, in bytes
    auto colorPitch = colorData.pitches[0] / sizeof(pxcBYTE);
    // planes[0] is the base address of the RGB24 pixel data
    Gdiplus::Bitmap tBitMap(colorInfo.width, colorInfo.height, colorPitch, PixelFormat24bppRGB, colorData.planes[0]);
    // ... save or copy tBitMap here, before the data is released ...
    image->ReleaseAccess(&colorData);
}
And Bitmap is a subclass of Image (https://msdn.microsoft.com/en-us/library/windows/desktop/ms534462(v=vs.85).aspx), so you can save the Image to a file in different formats.
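To round this out, here is a minimal sketch of saving that bitmap to a PNG file; the GetEncoderClsid helper is the standard MSDN pattern, and the snippet assumes GDI+ has already been initialized with GdiplusStartup:

#include <gdiplus.h>
#pragma comment(lib, "gdiplus.lib")

// standard MSDN helper: find the CLSID of the encoder for a given MIME type
int GetEncoderClsid(const WCHAR* format, CLSID* pClsid) {
    UINT num = 0, size = 0;
    Gdiplus::GetImageEncodersSize(&num, &size);
    if (size == 0) return -1;
    auto* pInfo = (Gdiplus::ImageCodecInfo*)malloc(size);
    Gdiplus::GetImageEncoders(num, size, pInfo);
    for (UINT i = 0; i < num; ++i) {
        if (wcscmp(pInfo[i].MimeType, format) == 0) {
            *pClsid = pInfo[i].Clsid;
            free(pInfo);
            return (int)i;
        }
    }
    free(pInfo);
    return -1;
}

// usage, inside the AcquireAccess block above, before ReleaseAccess:
CLSID pngClsid;
if (GetEncoderClsid(L"image/png", &pngClsid) >= 0) {
    tBitMap.Save(L"color_frame.png", &pngClsid, NULL);
}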

What is the simplest way to create a UI test in Android Studio that can take screenshots when I need it to?

I am trying to create a UI test in Android Studio which will navigate through the various screens of my application and take screenshots when I tell it to.
I am new to Android Studio and Android programming in general; I have a decent understanding of XML and Java, but I don't know much about build files and I am not very good at using Android Studio, it seems.
I started this endeavor a couple weeks ago, and the first solution I tried was to use uiautomator. However, the documentation on that page (and seemingly just about everywhere else) is geared towards development with Eclipse, which I would like to avoid using for this project if possible.
The next thing I tried was Espresso. After I overcame some issues with integrating Espresso into my project, I was able to write Espresso tests that would navigate through the screens of my application. However, unlike uiautomator, Espresso does not have built-in functionality to take screenshots at this time.
I first attempted to solve this problem of being unable to take screenshots with Espresso by writing custom code; as I'm still unfamiliar with Android, I wasn't really sure how to go about that, so I searched for help on the Internet (How to programmatically take a screenshot in Android?). However, I was unable to get the solutions I found to function from inside the test file.
Somebody recommended the usage of this tool: https://github.com/rtyley/android-screenshot-lib but I could not figure out how to import that into my project.
I eventually came back to uiautomator; I was still having a lot of trouble importing it into my project, and some people said that Robotium would help with that. I got Robotium to work, but I still could not import uiautomator.
It has been probably one month since I started using Android Studio, and in that time, I've had nothing but trouble simply getting the software to function properly. For the sake of brevity, I've omitted all the problems I have managed to solve on my own, but, to put it bluntly, I'm at the end of my patience.
TL;DR
If somebody could either:
-explain in the simplest possible way how to import uiautomator into an Android Studio project (I have read a lot of documentation about importing external libraries into a project, but it all either tells me to add a 'libs' folder to my project without specifying which type of folder to use [Java Resource Folder? Assets Folder? Module? etc.], or tells me to go into Project Structure, select my app, go to Dependencies, and choose "Import as Module," which does not work...)
OR
-explain how best to take a screenshot from inside of an Espresso test, including any instructions on how to import any required libraries
OR
-explain in detail some other way to create a UI test that can take screenshots...
...I would really appreciate it. I've spent days trying to figure out how to do this, and I am so frustrated. Many people have asked similar questions, but the answers are either too vague or the problems aren't close enough to my own.
Thanks!
Alright, after much trouble, I've found a very simple solution. It took me a very long time to work out, but if anyone else needs to do something similar, I'll put my conclusion here.
First of all, the testing framework that is easiest to use with Android Studio, it seems, is Espresso. Setting up Espresso is fairly simple; most of the instructions can be found here: https://code.google.com/p/android-test-kit/wiki/EspressoSetupInstructions -- make sure you read it carefully; it tells you basically everything you need to know, but I missed some important details and that caused me a lot of trouble.
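For orientation, here is a sketch of the Gradle wiring involved; the artifact coordinates and runner class have changed over time, so treat these as illustrative and take the authoritative values from the setup page above:

// app/build.gradle (coordinates are illustrative, not authoritative)
android {
    defaultConfig {
        testInstrumentationRunner "android.support.test.runner.AndroidJUnitRunner"
    }
}

dependencies {
    androidTestCompile "com.android.support.test.espresso:espresso-core:2.2.2"
}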
If you browse around that Espresso site, it tells you just about everything you need to know about how to write Espresso tests. It was a little frustrating for me because, if I wrote a test and the test failed, my device would then have connection issues with my laptop and I would have to disconnect and reconnect the USB cord I was using. I think this had something to do with the fact that I was using a Nexus 7 with a Windows 8 laptop, which has given me some problems in other areas, so you may not encounter this issue yourself.
Now, unlike uiautomator, whose documentation claims support for taking screenshots, Espresso has no built-in support for them. That means you'll have to take screenshots some other way. My solution was to create a new class (called HelperClass, in my case) inside my androidTest package and add this method to it:
// imports needed at the top of HelperClass:
import android.app.Activity;
import android.graphics.Bitmap;
import android.os.Environment;
import android.util.Log;
import android.view.View;
import java.io.File;
import java.io.FileNotFoundException;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.OutputStream;

public static void takeScreenshot(String name, Activity activity)
{
    // slightly modified version of the solution from http://www.ssaurel.com/blog/how-to-programmatically-take-a-screenshot-in-android/
    // I added "/Pictures/" to my path because that's the folder where I wanted to store my screenshots -- you might
    // not have that folder on your device, so you might want to replace "/Pictures/" with just "/" until you decide
    // where you want to store the screenshots
    String path = Environment.getExternalStorageDirectory().getAbsolutePath() + "/Pictures/" + name;

    // render the root view into an off-screen bitmap via the drawing cache
    View v = activity.getWindow().getDecorView().getRootView();
    v.setDrawingCacheEnabled(true);
    Bitmap bitmap = Bitmap.createBitmap(v.getDrawingCache());
    v.setDrawingCacheEnabled(false);

    OutputStream out = null;
    File imageFile = new File(path);
    // the following line will help you find where the screenshot will be stored on your device
    Log.v("Screenshot", "The image file path is " + imageFile.getPath());
    try {
        out = new FileOutputStream(imageFile);
        // choose JPEG format
        bitmap.compress(Bitmap.CompressFormat.JPEG, 90, out);
        out.flush();
    } catch (FileNotFoundException e) {
        // manage exception
    } catch (IOException e) {
        // manage exception
    } finally {
        try {
            if (out != null) {
                out.close();
            }
        } catch (Exception exc) {
            // ignore errors while closing the stream
        }
    }
}
In order for this function to work, you will also have to add the following line to your manifest file.
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE"/>
Without that, the function above will throw a FileNotFoundException every time you run it.
Finally, to call the takeScreenshot function from inside your Espresso test code, use this line (assuming you called your class HelperClass; if not, use the name of your class instead):
HelperClass.takeScreenshot("Whatever you want to call the file", getActivity());
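For context, here is a minimal sketch of how that call might sit inside a test class; the class and activity names are hypothetical, and it uses the ActivityInstrumentationTestCase2 style that getActivity() implies:

// hypothetical test class; MainActivity stands in for whichever activity you test
public class ScreenshotTest extends ActivityInstrumentationTestCase2<MainActivity> {

    public ScreenshotTest() {
        super(MainActivity.class);
    }

    public void testNavigateAndCapture() {
        // navigate with Espresso as usual, e.g.:
        // onView(withId(R.id.next_button)).perform(click());
        HelperClass.takeScreenshot("main_screen.jpg", getActivity());
    }
}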
Finding where your screenshots are stored can be a little difficult if you don't know where to look. I added a line of code to the takeScreenshot function that would print the filepath to LogCat, but I was using the file explorer on my computer to look for the screenshots on my Nexus (which was, of course, connected to the computer), and I couldn't find that path. However, I got a file explorer application on my tablet which made it very easy to find where the files were located in relation to everything else.
My solution may not be the simplest and it certainly isn't the best -- you'll fill your device up with screenshots before long if you aren't careful to delete the ones you don't need anymore, and I haven't got any idea how one would go about saving the screenshots directly to, say, a computer connected to the tablet via USB. That would certainly be helpful. However, if you really need a simple UI test that takes screenshots, and you're frustrated to no end like I was, this solution should probably help. I certainly found it useful.
I hope this helps somebody else -- it definitely solved my problems, at least for now.
Of course if you don't have all the restrictions that I did when I had to write a UI test that took screenshots, the other posts in this thread probably work much better.
You should give AndroidViewClient/culebra a try. Using the culebra GUI, you can automatically generate a test case that interacts with your app and takes a screenshot exactly when you indicate it should.

Core Data - NSURL command and momd

I'm working on a project that uses Core Data, and I can't seem to find an adequate explanation of why the following line of code in my program always returns nil for modelURL.
NSURL *modelURL = [[NSBundle mainBundle] URLForResource:@"CoreDataBooks" withExtension:@"momd"];
This example is straight out of Apple's sample code and it actually works in their program, but I can't get it to work in mine.
Questions:
1) Does something have to be in place before I try to implement this? I notice the Apple solution has a "CoreDataBooks.DCBStore" file that I do not have. I've tried a number of things to create this... no luck.
2) momd: I've read a lot about this, and it seems to be quite a bit different from "mom." I understand the "d" gives the dataset additional capabilities, and in some answers posted here the author said to use "mom" rather than "momd" without a great explanation of why. All the same, this doesn't work either.
As always, I appreciate your help!
Glenn
So -[NSBundle URLForResource:…] is returning nil. That's supposed to mean the requested resource doesn't exist.
Fire up the Finder and have a look inside the bundle. Confirm that the file really doesn't exist. Is there actually a momd file (or similar) there, but under a different name? You'll probably want to adjust your code to match.
If no such files exist, you probably need to add your Core Data model to your build target.
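As an illustration, a hedged sketch of loading the model with a fallback from the compiled .momd bundle to a flat .mom file (the resource name CoreDataBooks is taken from the question; substitute your model's name):

NSURL *modelURL = [[NSBundle mainBundle] URLForResource:@"CoreDataBooks" withExtension:@"momd"];
if (modelURL == nil) {
    // some projects compile the model to a flat .mom instead of a versioned .momd directory
    modelURL = [[NSBundle mainBundle] URLForResource:@"CoreDataBooks" withExtension:@"mom"];
}
NSAssert(modelURL != nil, @"Model not found -- check that the .xcdatamodeld file is a member of the build target");
NSManagedObjectModel *model = [[NSManagedObjectModel alloc] initWithContentsOfURL:modelURL];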

Using JavaScript API commands in ActionScript file

I am new to ActionScript and JSFL programming. I am using Adobe Flash Professional CS5.5 on Windows 7. I am trying to execute JavaScript API commands in my .as file using the MMExecute() function. When I publish the SWF file, the statements before and after the MMExecute statement get executed, but the JavaScript command string I pass to MMExecute doesn't seem to get executed. I am using a basic JSFL command that just traces to the output window in Flash.
I am publishing the SWF file to the WindowSWF folder inside the Configuration folder. The FLA file is blank, with nothing added to it, and the code I am using is as follows.
import flash.display.*;
import flash.text.*;
import flash.external.*;
import adobe.utils.MMExecute;
var str:String=new String();
str='fl.trace("Working..");';
MMExecute(str);
Please help me out.
Thanks in advance.
I'm not a real JS programmer, just an artist who got into JSFL, but:
var str:String=new String();
seems odd to me. I don't typically declare var types in JSFL.
(no idea if that's common or I'm just sloppy.)
I would typically just write
var str='fl.trace("Working..");';
It is also possible that you may need to escape the first semicolon.
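For reference, a minimal sketch of the same call written as plain ActionScript 3, without the String wrapper (the JSFL string is taken from the question; note that MMExecute only does anything when the SWF runs inside the Flash authoring environment, e.g. as a panel):

import adobe.utils.MMExecute;

// pass the JSFL source as a plain string literal
var jsfl:String = 'fl.trace("Working..");';
var result:String = MMExecute(jsfl);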
