My task is to develop a simple (since it's a training task) video player that can apply visual effects to the video as it plays. I thought AForge.NET could do the job perfectly, but the problem is that it doesn't seem to support audio at all.
I'm completely new to media processing, therefore any help would be appreciated.
Eventually, I found this tutorial to be a good starting point.
http://www.informikon.com/directshow-tutorials/a-jukebox-sample-in-c-version-2.html
Hope this helps someone experiencing a similar problem.
Related
I'm currently planning to build a website for training videos. The material will be costly to produce, and I want to prevent users from freely downloading the videos.
I've seen some of the questions asked on here already but they all seem quite old, so maybe things have changed. Is there a way to stop this from happening or a service you can pay for?
Playing a video is downloading a video, so no, you can't stop somebody from downloading it if you want them to be able to play it. To control how it's played after it's downloaded, you need DRM; many live video platforms offer DRM options.
I'm struggling to find a working solution and could use some help.
I am working on a small IoT project where I want to abuse NFC tags.
I've succeeded in reading/writing while the app is open, but I want to read tags while the app is closed.
More or less, I just want to send a small UDP message when the appropriate NFC tag is read, which turns out to be a bit more difficult to do from a background task.
The main headache is that I can't find a task trigger that fires on NFC activity. I've tried SmartCardTrigger and ProximitySensorTrigger from the following sources:
https://msdn.microsoft.com/en-us/windows/uwp/devices-sensors/host-card-emulation
https://github.com/Microsoft/Windows-universal-samples/tree/master/Samples/ProximitySensor
The ProximitySensorTrigger seems to fire almost at random, and if anything fires less often when I press the NFC tag against the phone. Maybe I'm doing something wrong.
The SmartCardTrigger doesn't fire at all. I guess the EmulatorNearFieldEntry trigger type is what I want, but for some reason it's unsupported (?).
Anyhow, I am using a Lumia 920 running Windows 10 Mobile. To my knowledge it does not support smart cards, but I hoped it could use the same trigger for NFC tags.
Reading the responses to a similar question, Akash Chowdary suggested that it may be possible to write a custom trigger. If you have any tips that might point me in the right direction, please share them. I'm capable of doing the research, but it's a big sea, and it would really help to know where to start ^^.
I'm quite the noob when it comes to background tasks, and I'm very confused as to why, after registering a SmartCardTrigger task, I have no tasks running.
If I use, for example, a TimeZoneChange trigger or a ProximitySensor trigger, the task shows up as it should. Maybe it's because my Lumia doesn't support the SmartCardTrigger? I would have expected it to throw an error if that were the case, but what do I know.
Tl;dr: I want to read NFC tags in a background task, how do I do that on a Lumia 920 inside a basic UWP project?
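Whichever trigger ends up working, the UDP part itself is simple. Below is a minimal sketch of the "send a small message when a tag is read" step; it's in Python for brevity (on UWP you'd use Windows.Networking.Sockets.DatagramSocket inside the background task), and the host, port, and message format are made-up placeholders. It round-trips the datagram over loopback just to show the message arriving.

```python
import socket

def notify(host, port, tag_id):
    """Fire a small one-shot UDP datagram when a tag is read."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.sendto(("tag:%s" % tag_id).encode(), (host, port))

# Loopback round-trip: bind a receiver on a free port, send to it,
# and read the datagram back.
recv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv.bind(("127.0.0.1", 0))
notify("127.0.0.1", recv.getsockname()[1], "04A2B3")
msg = recv.recv(64)
recv.close()
```

UDP is a good fit here precisely because it's fire-and-forget: a background task that may be killed at any moment doesn't have to hold a connection open.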
I have a problem with Azure Media Services. It's configured to take a stream from an RTMP source and encode it to multiple resolutions (pretty standard, I think). The problem is that when the source stream ends (for example, the power goes out or the internet disconnects) and I resume streaming, it doesn't come back, so to speak.
The only thing anyone using the player can see is the slate that I've set up.
It happens with every piece of software I've tried: OBS, FLE, and vMix.
The stream is published the whole time, and I'm using the DefaultProgram, but it happens anyway, whether with the default program or one created manually.
If anyone has an idea what's going on, it would be greatly appreciated.
Unfortunately, if the ingest stream disconnects, the current solution is to restart the channel.
I'm developing an app that will pull in static PNG visualizations of sound clips (30 seconds max). The images will then act as the background image of the player / scrubber in the UI.
I'm looking for APIs / tools that would support the processing and visualization of sound clips on the back-end, generating and saving a quality PNG. I thought Processing might be an option, but I'm not yet sure whether it has these specific capabilities (it's also not really designed to run server-side). Any and all suggestions would be great.
Related - if anyone is an expert in this and can give me insight into the type of data that can be extracted and visualized from sound, that would also be great. Though I'm hoping that by identifying possible tools or APIs, that information will become clearer.
Thank you.
Claudia
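To make the "type of data" question concrete: the classic waveform image is built from the amplitude envelope, i.e. the min/max sample values in each of N time windows across the clip (spectral data from an FFT is the other common source, for spectrograms). Below is a stdlib-only Python sketch of that extraction step. It synthesizes a short test tone so it's self-contained, computes per-window peaks, and renders them as SVG; for an actual PNG on a server you'd hand the same peak data to something like Pillow or matplotlib (named here as likely options, not requirements).

```python
import math
import struct
import wave

def waveform_peaks(path, buckets=200):
    """Read a mono 16-bit WAV and return one (min, max) sample pair per bucket."""
    with wave.open(path, "rb") as w:
        n = w.getnframes()
        raw = w.readframes(n)
    samples = struct.unpack("<%dh" % n, raw)
    size = max(1, n // buckets)
    peaks = []
    for i in range(0, n, size):
        chunk = samples[i:i + size]
        peaks.append((min(chunk), max(chunk)))
    return peaks[:buckets]

def peaks_to_svg(peaks, width=600, height=120):
    """Render peak pairs as vertical bars around a center line, as an SVG string."""
    mid = height / 2
    bar_w = width / len(peaks)
    bars = []
    for i, (lo, hi) in enumerate(peaks):
        y1 = mid - (hi / 32768.0) * mid  # top of bar (positive peak)
        y2 = mid - (lo / 32768.0) * mid  # bottom of bar (negative peak)
        bars.append('<line x1="%.1f" y1="%.1f" x2="%.1f" y2="%.1f" stroke="black"/>'
                    % (i * bar_w, y1, i * bar_w, y2))
    return ('<svg xmlns="http://www.w3.org/2000/svg" width="%d" height="%d">%s</svg>'
            % (width, height, "".join(bars)))

# Write a 1-second 440 Hz test tone so the sketch runs without external files.
with wave.open("tone.wav", "wb") as w:
    w.setnchannels(1)
    w.setsampwidth(2)
    w.setframerate(8000)
    frames = b"".join(
        struct.pack("<h", int(20000 * math.sin(2 * math.pi * 440 * t / 8000)))
        for t in range(8000))
    w.writeframes(frames)

peaks = waveform_peaks("tone.wav")
svg = peaks_to_svg(peaks)
```

A real pipeline would first decode compressed clips (MP3, AAC) to PCM, e.g. by shelling out to ffmpeg, before running the same peak extraction.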
I have recently installed Ubuntu Linux on my machine. This machine is on pretty much all of the time and houses my music collection. I have looked at various solutions for a central repository for music, and each one is lacking in one way or another. I have done some .NET programming and thought this would be an ideal project to try on Mono.
My question is this: what parts of an application are needed to stream music around my house? I would like a web front end, but other than that I'm not really sure what parts make up this sort of application.
Any insight into this type of application is much appreciated!
No need to reinvent the wheel:
http://en.wikipedia.org/wiki/Digital_Living_Network_Alliance
http://manuals.playstation.net/document/en/ps3/3_15/settings/connectdlna.html
http://www.obsessable.com/feature/home-media-streaming-101-dlna-explained/
Since you are running Linux, I suggest you try http://mediatomb.cc/
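That said, if you do want to build it yourself to answer the "what parts are needed" question, the minimum is three pieces: a library (your directory of files), an HTTP server that hands out the audio bytes, and a web front end whose &lt;audio&gt; elements point at those URLs. A rough stdlib-only Python sketch of the middle piece is below; the `music` directory name is a made-up placeholder, and note that the stdlib handler does not support HTTP Range requests, so seeking within a track needs a real server (or DLNA software like MediaTomb, as above).

```python
import os
import threading
import urllib.request
from functools import partial
from http.server import HTTPServer, SimpleHTTPRequestHandler

MUSIC_DIR = "music"  # hypothetical path to your collection

def serve(directory, port=0):
    """Serve `directory` over plain HTTP; port 0 picks a free port."""
    handler = partial(SimpleHTTPRequestHandler, directory=directory)
    server = HTTPServer(("127.0.0.1", port), handler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server

# Drop a stand-in "track" into the library and fetch it back over HTTP,
# the same way a browser's <audio> element would.
os.makedirs(MUSIC_DIR, exist_ok=True)
with open(os.path.join(MUSIC_DIR, "track.mp3"), "wb") as f:
    f.write(b"fake mp3 bytes")

server = serve(MUSIC_DIR)
url = "http://127.0.0.1:%d/track.mp3" % server.server_address[1]
body = urllib.request.urlopen(url).read()
server.shutdown()
```

On top of this you would add a metadata index (artist/album tags) and a small HTML page listing the tracks; everything else is optional polish.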