I have searched far and wide for the answer to this and am surprised that there aren't more people talking about this.
There doesn't appear to be a way to ensure that the selected stream/track is a native camera rather than a stream provided by something like ManyCam (https://manycam.com/?__c=1) or AlterCam. The use case is secure apps that, for security reasons, want to ensure the video/images/data coming through the stream comes from a "legitimate" (native camera) source rather than a source that can be anything and/or altered before it reaches the client code.
Is there any known way of ensuring this? I'm not looking for a whitelist/blacklist option, since many of these virtual camera programs allow changing the camera name, which is what shows up in the label property.
I'm working on the sound part of an interactive installation that needs an event to be triggered via OSC an undefined number of times, so that the sound linked to it overlaps with itself instead of being rewound and restarted.
Would it be possible to do that without making an array of loaded copies of the same sound?
I'm currently trying to do it with Processing and the Minim library.
Do you think it would be easier to achieve with another programming environment? I've run into the same difficulty trying to do it with Pure Data. Any tip or clue would be extremely welcome.
Thanks a lot.
You will need multiple readers ([tabread~] or [tabplay~] in Pd; I don't know about Processing/Minim, but the same principle applies) to read the table multiple times in parallel, where each reader can be started separately.
However, you only need a single instance of your data array (e.g. [table]), as the various readers can access the same array independently.
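For the Processing/Minim side, Minim's Sampler UGen implements exactly this multiple-readers idea: one copy of the sample data, with several voices that can be (re)triggered while earlier ones are still sounding. A minimal sketch (the file name and voice count are placeholders):

import ddf.minim.*;
import ddf.minim.ugens.*;

Minim minim;
AudioOutput out;
Sampler voice;

void setup() {
  size(200, 200);
  minim = new Minim(this);
  out = minim.getLineOut();
  // One Sampler, one copy of the audio data, up to 8 overlapping playbacks.
  voice = new Sampler("hit.wav", 8, minim);
  voice.patch(out);
}

void mousePressed() {
  // Each trigger starts a new voice without stopping the ones still playing.
  voice.trigger();
}

In the installation, you would call trigger() from your OSC handler (e.g. oscP5's oscEvent()) instead of mousePressed().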
Can you use Java libraries in Processing? Processing is built on Java, yes?
If you can, I have a library you can use, built around a class I call AudioCue, available via GitHub. It is modeled on a Java Clip but with additional capabilities: it allows multiple, concurrent playback, and it has real-time controls for volume, panning, and playback speed, in case you want to play around with adding some more interactivity to your installation.
I would love to know if it can be used with Processing. Please follow up with me if you try this route. I'd like to see it done, and can possibly assist.
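Basic usage is roughly like this (a sketch only, with a placeholder resource name; see the GitHub README for the exact, current API):

import java.net.URL;

public class CueDemo {
    public static void main(String[] args) throws Exception {
        URL url = CueDemo.class.getResource("/footstep.wav"); // placeholder resource
        // The int is the polyphony: the max number of concurrent playbacks.
        AudioCue cue = AudioCue.makeStereoCue(url, 4);
        cue.open();          // claims an audio output line
        cue.play();          // each play() starts an independent instance
        cue.play(0.5);       // a second, overlapping instance at half volume
        Thread.sleep(2000);  // let the sound finish before tearing down
        cue.close();
    }
}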
If Processing allows you to send PCM directly out for playback, then the basic algorithm is to store the audio data in an array, and create pointers or cursors (depending on your preferred terminology) that independently iterate through that array. This is the main basis of the algorithm I use for AudioCue, with the PCM being routed out via a Java SourceDataLine.
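Here is a stripped-down sketch of that algorithm (not AudioCue itself; all names are made up): one shared sample array, a list of independent cursors, and a mix loop that sums whatever each cursor points at before writing the frames to a SourceDataLine.

import javax.sound.sampled.*;
import java.util.ArrayList;
import java.util.List;

public class CursorMixer {
    static class Cursor { int pos; }  // one independent read position

    public static void main(String[] args) throws Exception {
        // One shared copy of the audio data: a 1-second 440 Hz tone as a stand-in.
        float[] pcm = new float[44100];
        for (int i = 0; i < pcm.length; i++) {
            pcm[i] = 0.3f * (float) Math.sin(2 * Math.PI * 440 * i / 44100.0);
        }

        AudioFormat fmt = new AudioFormat(44100f, 16, 1, true, false);
        SourceDataLine line = AudioSystem.getSourceDataLine(fmt);
        line.open(fmt);
        line.start();

        List<Cursor> cursors = new ArrayList<>();
        cursors.add(new Cursor());            // first playback starts immediately
        boolean secondStarted = false;

        byte[] buf = new byte[2048];          // 1024 mono 16-bit frames per pass
        int framesWritten = 0;
        while (!cursors.isEmpty()) {
            // A quarter second in, start a second, overlapping playback.
            if (!secondStarted && framesWritten >= 11025) {
                cursors.add(new Cursor());
                secondStarted = true;
            }
            int frames = buf.length / 2;
            for (int f = 0; f < frames; f++) {
                float sum = 0;
                for (Cursor c : cursors) {
                    if (c.pos < pcm.length) sum += pcm[c.pos++];
                }
                int s = (int) (Math.max(-1f, Math.min(1f, sum)) * 32767);
                buf[2 * f] = (byte) s;             // little-endian low byte
                buf[2 * f + 1] = (byte) (s >> 8);  // high byte
            }
            line.write(buf, 0, buf.length);
            framesWritten += frames;
            cursors.removeIf(c -> c.pos >= pcm.length);  // drop finished voices
        }
        line.drain();
        line.close();
    }
}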
I'm aware of the existence of DDS files, which allow programming of display screens on the AS/400, but is there another way?
Specifically, what I want to do is manipulate the terminal buffer directly, to be able to display something other than just text.
For example, take a typical text-menu screen. Let's say that, in memory, there is a two-dimensional char array, text[20][80], for the text menu, and below that a pixel buffer array of size [200][800].
Is there a way to access either of those arrays directly?
I would like to be able to create a displayable menu entirely in C, without the need for a display file, and also to display other kinds of graphics (images) directly in the pixel buffer.
"Is there a way to access either of those arrays directly?"
That's easy enough, though a "display file" that has no formatted fields will still be needed. The 'file' will be the connection between the program and the physical device (or the emulator). You can define a single large area that contains whatever "text" you want your program to put into it. This can even include display field attributes that delimit input areas.
For the most control, the DDS USRDFN keyword is appropriate. But for simple stuff like lists of menu items, almost any large text field can be output to.
Outputting simple text is easy. For finer control such as USRDFN formatting, a detailed understanding of the 5250 protocol is needed.
One kind of alternative would be to use the User Interface Manager (UIM) APIs to update a PANEL's "text area" (:TEXT) via its USREXIT= application program. The UIM handles everything as far as the "display file" definition and actual I/O go. The UIM can be thought of as an HTML-like interface for 5250; it uses a very similar markup language to define PANELs.
Another alternative is the Dynamic Screen Manager (DSM) APIs. These give much finer control than the UIM or DDS methods (though DDS USRDFN gets very close). But as with USRDFN, actual device control will require 5250 protocol knowledge.
"...and also display other kind of graphics (images) directly in the pixel buffer."
There is no "pixel buffer" for 5250 nor even 'pixels'. It's a character-based protocol, like telnet. If you're going for images or 'pixels', you're into browser interfaces, or perhaps Java and NAWT, or X-windows, etc.
Now, granted, with TCP/IP and sockets you can do essentially anything you're able to program, including downloading and installing 3rd-party code libraries, within the network restrictions surrounding your server. But it is in fact a server, so GUI-style apps generally shouldn't run on it; that's the same as for almost all types of servers. Code the GUI on the client system rather than the server. But you can do it if you really want to.
I'm not sure why you'd want to do this...
Nowadays, it'd be much easier to simply generate your output as HTML and serve it up via the integrated Apache web server.
But if you really want to do graphics via 5250, it can be done...theoretically at least. In 20+ years on the platform, I've never seen it.
But way back when (1994?), IBM added support for the Graphical Data Display Manager (GDDM) and Presentation Graphics APIs to OS/400. "GDDM is a means of displaying, printing, or plotting pictures. Presentation Graphics routines are a means of displaying, printing, or plotting business charts."
The support is still in the OS. However, client side support is NOT available in IBM i Access for Windows or the most recently released client, IBM Access Client Solutions (ACS). It appears that the standalone IBM Personal Communications product may support GDDM.
For complete control of the character buffer, take a look at the Dynamic Screen Manager (DSM) APIs. The DSM APIs are "a set of screen I/O interfaces that provide a dynamic way to create and manage screens for the Integrated Language Environment® (ILE) high-level languages. Because the DSM interfaces are bindable, they are accessible to ILE programs only."
There is a way to do it in ILE C/C++. This was fun to investigate, since I hadn't tried it myself.
The only documentation on it I could find (page 183 and onward) is from V5R1, but you can cross-reference the functions it uses against the 7.3 manual (possibly page vii/7) to see whether they still work the same way.
Hope this helped!
I'm looking for a program that can recognize individual audio samples from my computer and reroute them to trigger WAV files from a library. In my project this would need to happen in real time, since latency would not be acceptable. I tried using dictation software that recognizes words to trigger opening a file, and that's the direction I want to go, but instead of words I want it to be sounds, and it has to happen in real time. I'm not sure where to go and am just looking for some guidance. Does anyone have any suggestions of what I should do?
That's a fairly broad question, but I can tell you how I would do it. (Hardly the only way, but where I would start.)
If you're looking for real-time input, the Java Sound library allows for that (the official Java tutorials have an excellent section on it). Just note that microphone input from a web page is difficult on any platform, due to major security concerns, so this would be a desktop application.
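A minimal capture loop with Java Sound might look like this (a sketch; the format and buffer size are arbitrary choices):

import javax.sound.sampled.*;

public class MicCapture {
    public static void main(String[] args) throws Exception {
        // 16-bit mono at 44.1 kHz; adjust to whatever your analysis expects.
        AudioFormat fmt = new AudioFormat(44100f, 16, 1, true, false);
        TargetDataLine mic = AudioSystem.getTargetDataLine(fmt);
        mic.open(fmt);
        mic.start();

        byte[] buf = new byte[4096];
        while (true) {
            int n = mic.read(buf, 0, buf.length);  // blocks until data arrives
            // Hand buf[0..n) to the recognition stage here.
        }
    }
}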
If it needs to be real time, the first thing I would suggest is to stream and multithread the hell out of it. I would suggest the Java 8 Stream API, but since you're looking for subsamples that match a specific pattern, each data point will have to be aware of the state of its neighbors, and that isn't easy with streams.
You will probably want to know if a sound roughly resembles an audio profile, so for that, I would pick a tolerance on just how close you want it to be for a match (remembering that samples may not line up 100% anyway, so "exact" is not an option), and then look up Hidden Markov Models. I suggest these because they're what voice recognition software typically uses, and while your sounds may not be voices, it will give you an idea of what has already been done.
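As a concrete baseline for that tolerance idea (this is not the HMM approach, just the simplest possible comparison), normalized cross-correlation gives a score in [-1, 1] that you can threshold:

public class Similarity {
    /** Normalized cross-correlation of two equal-length sample windows. */
    static double similarity(float[] a, float[] b) {
        double dot = 0, normA = 0, normB = 0;
        for (int i = 0; i < a.length; i++) {
            dot += a[i] * b[i];
            normA += a[i] * a[i];
            normB += b[i] * b[i];
        }
        // The small epsilon avoids division by zero on silent windows.
        return dot / (Math.sqrt(normA) * Math.sqrt(normB) + 1e-12);
    }
}

A match is then similarity(window, template) >= tolerance, for whatever tolerance gives you an acceptable false-positive rate.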
You'll also want to maintain a limited list of audio samples in memory. Specifically, you will likely need the most recent data, because an audio signal is a time-variant signal, and you can't get a match from just one point. I wouldn't make it much longer than the longest sample you're looking to recognize, as audio takes up a boatload of memory.
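That rolling window is just a circular buffer; a sketch (the capacity is a placeholder, sized to your longest target sample):

/** Fixed-size rolling window over the most recent audio samples. */
class RollingWindow {
    private final float[] ring;
    private int writePos = 0;
    private boolean full = false;

    RollingWindow(int capacity) { ring = new float[capacity]; }

    void add(float sample) {
        ring[writePos] = sample;               // overwrite the oldest sample
        writePos = (writePos + 1) % ring.length;
        if (writePos == 0) full = true;
    }

    /** Snapshot in chronological order, oldest first. */
    float[] snapshot() {
        int n = full ? ring.length : writePos;
        float[] out = new float[n];
        int start = full ? writePos : 0;
        for (int i = 0; i < n; i++) {
            out[i] = ring[(start + i) % ring.length];
        }
        return out;
    }
}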
Lastly (for audio), I would recommend picking a standard format for comparison: make it only as good as it needs to be to get decent results, and start high. You will want to convert everything to that format before you compare it.
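Java Sound can do that conversion for you via AudioSystem; a sketch, assuming a WAV file on disk (the path and target format are placeholders):

import javax.sound.sampled.*;
import java.io.File;

public class Normalize {
    public static void main(String[] args) throws Exception {
        // The agreed-on comparison format: 16-bit signed mono, 44.1 kHz.
        AudioFormat target = new AudioFormat(44100f, 16, 1, true, false);
        AudioInputStream in = AudioSystem.getAudioInputStream(new File("sample.wav"));
        AudioInputStream converted = AudioSystem.getAudioInputStream(target, in);
        byte[] pcm = converted.readAllBytes();  // Java 9+; loop read() on older JDKs
        System.out.println(pcm.length + " bytes in the comparison format");
    }
}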
Once you recognize a specific sound, it's basically a Command Pattern. Specific sounds can be mapped, even with a java.util.HashMap, to specific files, which (if there are few enough) you might even have pre-loaded.
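The dispatch itself can be tiny; a sketch with made-up sound IDs and file names:

import java.util.HashMap;
import java.util.Map;

public class SoundDispatcher {
    // Command Pattern: each recognized sound ID maps to an action.
    private final Map<String, Runnable> commands = new HashMap<>();

    public SoundDispatcher() {
        commands.put("door-knock", () -> play("doorbell.wav"));
        commands.put("hand-clap",  () -> play("applause.wav"));
    }

    public void onRecognized(String soundId) {
        Runnable action = commands.get(soundId);
        if (action != null) action.run();   // ignore sounds we don't know
    }

    private void play(String file) {
        // Placeholder: hand off to a pre-loaded Clip or your playback engine.
        System.out.println("Playing " + file);
    }
}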
Lastly, it's worth looking at the Java Speech API. It's not part of the JDK and it's quite dated, but you might get some good advice from its implementation.
This is of course the advice of a Java-preferring programmer, but I imagine there are some decent libraries in Python and Ruby to help you as well, and of course there's something in C somewhere. This may sound like a lot, but most of the material is already implemented and ready to go.
Hopefully this helps; I look forward to other answers.
This is one of my assignments and I need some help getting started. The basic idea is that I have to design a self-destructing email program that is capable of destroying the message after a duration of (n).
Speaking of self-destructing emails, there are quite a few services on the internet offering the same thing. But what they do is just convert the email message into an image and store it on their servers, then send the message with that image inline. When they receive a hit on the image (which means the message was opened), they simply delete it, and the inline image link breaks! BOOM!
IMO, that's not what a self-destructing email should be like. Nevertheless, in my case, I have to take care of the following points:
I have to do it for TEXT. No image, nothing else.
I have to assume that the systems used throughout the process will be UNIX-based (I don't know how that is going to make a difference).
There are also some hints regarding the usage of various network layers in solving the problem.
This isn't supposed to be done "in general". What I mean by that is, I have to do it ONLY for one or two UNIX systems. Let me put it this way: all I have is two UNIX systems and nothing else. Now I want to create a program (on UNIX itself) that does the self-destructing. I have total control of the protocols and the network layers, and I have to code anything and everything required at any level.
This is more geared towards the StackOverflow side of things, but I have no problem getting you started.
The first thing I'd like to point out is that you seem to be heavily over-analyzing this. The services that have image-based self-destructing e-mails are simply deleting a file after it is viewed. All you need to do differently is put that text in a file and get its contents before deleting it. This fits well with the UNIX philosophy, since so many programs already make use of flat files.
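A sketch of that flat-file approach, combining the read with the (n)-duration destruction from the assignment (the path and duration are placeholders):

import java.nio.file.*;
import java.util.concurrent.*;

public class SelfDestruct {
    public static void main(String[] args) throws Exception {
        Path message = Paths.get("/var/mail/self-destruct/msg-42.txt"); // placeholder
        String text = Files.readString(message);  // Java 11+; loop readAllBytes on older JDKs
        System.out.println(text);

        // Destroy the message n seconds after it was first read.
        long n = 30;                              // placeholder duration
        ScheduledExecutorService timer = Executors.newSingleThreadScheduledExecutor();
        timer.schedule(() -> {
            try { Files.deleteIfExists(message); } catch (Exception e) { e.printStackTrace(); }
            timer.shutdown();
        }, n, TimeUnit.SECONDS);
    }
}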
The part you seem to have left out is how you are building this. You describe it as an e-mail program and then talk about web services. Is this a web-based project or a program you are designing for Linux? Do you have to code everything from scratch or can you parse output from Linux utilities to grab the mail? These kinds of things really would simplify the process.
I want to code a trading bot for Magic: The Gathering Online. This bot should wait until someone offers to trade, accept, look through the cards available from the other trader (the information is shown on screen), and perform other similar functions. I have several questions:
How can it know that someone is offering a trade?
How can it know that the other trader has some card (the information is stored in pictures)?
I just cannot imagine right now how to do this; I have no experience with it. Until now I've been coding only console programs for my physics needs.
First, you should note that some online games forbid bots, as they can give certain players unfair advantages. The MTGO Terms of Service do not seem to say anything about this, though they do put restrictions on anything that might negatively impact the service. They have also said there is a possibility they will add an API in the future, so they don't seem to be against the idea of automation, but are not supporting it at the moment. Tread carefully here, but it looks like it should be OK to write a bot as long as it is not harmful or abusive. This is not legal advice, and it would be a good idea to ask the folks who run MTGO for permission. Edit: since I wrote this, it has been pointed out that there are lots of bots already, so there should be no problem writing one.
Assuming that it is not forbidden by the terms of service, but they do not have an API, you will have to find a way to detect what's going on, and control the game automatically. There's a pretty good series of articles on writing poker bots (archived copy), which has some good information on how to inject a DLL into an application, scrape the screen, and control the application. That might provide you with a starting point for doing this sort of thing.
You might also want to look for tools that other people have already written for doing this. It looks like there are several existing MTGO bots, but they all seem a bit sketchy (there have been some reports of them stealing passwords), so be careful there.
Edit
Since this answer still seems to be getting upvotes, I should probably update it with some more useful information. Since writing it, I have found a great UI automation system called Sikuli, which lets you write programs in Python that automate a GUI. It includes image-recognition features that make it very easy to recognize buttons, cards, and other UI elements: you take a screenshot, crop it down to just the thing you're interested in, and do fuzzy image matching (so that changing backgrounds and the like don't cause the match to fail). It even includes a custom IDE that lets you embed those screenshots directly in your source code, so you can see exactly what the code is looking for. Here's an example from the documentation (apologies for the code formatting; embedding images inline in code doesn't work in StackOverflow's restricted subset of HTML, so the screenshot arguments are missing below):
def resizeApp(app, dx, dy):
    switchApp(app)
    # In the original, Pattern() wraps an inline screenshot of the window corner.
    corner = find(Pattern().targetOffset(3, 14))
    drop_point = corner.getTarget().offset(dx, dy)
    dragDrop(corner, drop_point)

resizeApp("Safari", 50, 50)
This is much easier to get started with than the techniques mentioned in the article linked above, which involve injecting a DLL into the process you are automating. Sikuli runs entirely at the UI level, so you never have to modify the program you are automating or worry about changes to its internals breaking your script.
One thing it is a bit poor at is handling text; it has OCR features, but they aren't all that good. If the text is selectable, however, you can select the text, copy it, and then look directly at the clipboard.
If I were to write a bot to automate something without a good API or text-based interface, Sikuli is probably the first tool I would reach for.
This answer is constructed from my comments.
What you are trying to do is hard, whichever way you approach it.
Arguably the easiest way is to totally mimic the user: the application presses buttons, moves the mouse, and so on. The downside is that it depends on being able to recognise what is on the screen.
That becomes easier if you can alter the game's files, because you can then just re-skin the required cards (change their image/texture) to a single unique colour.
The major downside is that you have to keep the game as the top-level window, or run it in a virtual machine. Neither is ideal.
Another method is to read the process's memory. You may be able to find a published list of memory locations, which would make things simpler; otherwise it involves a lot of hard work with a debugger to deduce the memory addresses. It also helps (a lot) to be able to understand assembly.
The third method is to intercept the network packets and alter them. This is easier than the method above because (at least for me) it is easier to reverse-engineer the protocol, as you have less information to deal with. It is just a matter of setting up a packet sniffer, performing an action with one variable different (for example, the card), and comparing the differences.
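For the sniffing setup, a library like pcap4j can capture the traffic so you can diff two captures; a rough sketch (the interface name and filter are placeholders):

import org.pcap4j.core.*;
import org.pcap4j.packet.Packet;

public class SnifferDemo {
    public static void main(String[] args) throws Exception {
        // "eth0" and the port are placeholders; pick the interface the game uses.
        PcapNetworkInterface nif = Pcaps.getDevByName("eth0");
        PcapHandle handle = nif.openLive(65536,
                PcapNetworkInterface.PromiscuousMode.PROMISCUOUS, 10);
        handle.setFilter("tcp port 80", BpfProgram.BpfCompileMode.OPTIMIZE);

        // Dump a handful of packets; capture once per action, then diff the dumps.
        for (int i = 0; i < 10; i++) {
            Packet packet = handle.getNextPacketEx();
            System.out.println(packet);  // pcap4j pretty-prints the header fields
        }
        handle.close();
    }
}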
The thing you need to check is that you are not breaking the EULA. I don't know how this game works, but most of the games I have come across have a EULA that prohibits doing any of the things I have mentioned (i.e., you get banned).