When I try to take a screenshot of my desktop, I find that the area of the Windows Media Player window is empty, with nothing in it. I googled this for a while and found that most video players use overlay surfaces for performance, and overlay surfaces cannot be captured. Some suggestions say to disable DirectDraw acceleration so that you can grab a still image from a live video, but by the time the player is launched it is already using hardware acceleration, and even if I disable hardware acceleration, the change does not take effect until I relaunch the player. My question is: how can I capture an image from a live video without disabling DirectDraw acceleration? Or, how can I make the setting (disabling hardware acceleration) take effect without relaunching the video player?
I won't play the video with my program; I just want to take a still image while it is being played by a third-party player such as Windows Media Player or RealPlayer.
I want to do this programmatically, say with C/C++ and DirectX, so I don't want to use any existing software or tools.
No matter which player is in use, my program should be able to capture it. I know some tools can do this, such as CapTrue and Tencent QQ, so I think it is possible.
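For reference, a minimal GDI capture sketch (C++, plain Win32, BitBlt with the CAPTUREBLT flag) illustrates the kind of capture involved; note that with a hardware overlay active, the player area in the resulting bitmap still comes out empty:

// Minimal GDI screen-capture sketch (Win32 C++). CAPTUREBLT pulls in layered
// windows, but hardware overlay surfaces are composed by the video card and
// therefore still appear as the empty key color in the captured bitmap.
#include <windows.h>

HBITMAP CaptureScreen()
{
    int w = GetSystemMetrics(SM_CXSCREEN);
    int h = GetSystemMetrics(SM_CYSCREEN);

    HDC screenDC = GetDC(NULL);                      // DC for the whole screen
    HDC memDC    = CreateCompatibleDC(screenDC);     // off-screen DC
    HBITMAP bmp  = CreateCompatibleBitmap(screenDC, w, h);
    HGDIOBJ old  = SelectObject(memDC, bmp);

    BitBlt(memDC, 0, 0, w, h, screenDC, 0, 0, SRCCOPY | CAPTUREBLT);

    SelectObject(memDC, old);
    DeleteDC(memDC);
    ReleaseDC(NULL, screenDC);
    return bmp;   // caller owns the HBITMAP (DeleteObject when done)
}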
A workaround can be to use VLC to play your file. It provides a screenshot option directly.
AFAIK, this is an intentional "feature" in WMP, for protection. If you need to use WMP, then you need a decent screen grabber. Unfortunately, the ones I know of, like HyperSnap, are not free.
If you only want a screen grab of a frame, VLC is your friend, as @zdd said.
Related
1. The problem I've encountered
Hi, I'm currently making a desktop application with Electron.js. I need a feature that takes a screenshot (including the mouse cursor), but this is a problem for me because I do not know how to do it.
I think the reason I am unable to solve this problem is that I have no knowledge of operating systems. I take "taking a screenshot" to mean "getting the image data displayed on the computer monitor", but how can I access that?
2. What I've tried or considered
At first I tried Electron.BrowserWindow.capturePage(), but its result did not meet my needs, for two reasons: 1) my application has a transparent background, and any transparent area becomes black when I take a screenshot; 2) the mouse cursor is not captured.
Meanwhile, I am aware of APIs such as the Screen Capture API and the Media Capture and Streams API (in web browsers), and perhaps I can give them a try, because I'm using Electron.js, Electron.js uses Chromium, and web browsers implement those APIs.
However, it is still a problem that what those APIs handle is media streams (i.e. video), which is not suitable for my case. I think it is possible to take a single frame out of a media stream somehow, but that seems like overkill, given that all I want is a single screenshot.
Meanwhile, because Electron.js also uses Node.js, I think it should also be possible to call the Windows API (maybe via a Foreign Function Interface?) or to invoke child_process.exec() in order to take a screenshot.
3. The question I would like to ask
How can I access the monitor image data, so that I can implement "the screenshot feature which meets my requirements: see-through and mouse cursor" (with as few third-party libraries as possible)?
What computes the final image data that gets displayed on my computer monitor? It seems to be the work of my graphics card, because my monitor and graphics card are connected to each other with a cable.
4. Miscellaneous curiosities (not much related to the question)
...Another curiosity is how, why, and where the transparent area gets rendered as the #000000 color.
Meanwhile, it is also interesting that there are some programs that do not allow me to take a screenshot of their contents; the area where those programs are located looks black. How could the developers of such programs implement this?
Thank you for reading my question.
After some internet searching, I found it difficult to access the display data directly (specifically, the video RAM data from my graphics card). So I decided to use a workaround; as the well-known aphorism goes, "all roads lead to Rome".
Which means,
See-through screenshots can be achieved either by using the native screenshot feature (the PrintScreen key) or by using a script that takes a picture of the entire screen.
Screenshots with the mouse cursor can be achieved by overlaying a mouse cursor image at the coordinates where the cursor is located (see the sketch below).
However, in my case I do not actually need to save screenshots as files, so I think it is enough to draw a custom mouse cursor image, hide the original cursor, make the custom image follow the mouse, and take the screenshot with a manual key press. (I think it is also feasible to take a screenshot with the PrintScreen key, get the screenshot data from the clipboard, and do some image processing such as adding a cursor overlay.)
※ I saw code that simulates a key press (SendKey()) in order to take a screenshot, and I think this is a good approach because no manual key press is needed.
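A minimal Win32 sketch of the cursor-overlay step (assuming a memory DC that already holds a screen capture of the primary monitor starting at (0,0); GetCursorInfo, GetIconInfo and DrawIconEx are the relevant calls):

// Sketch: draw the current mouse cursor onto a memory DC that already
// contains a screenshot. The hot spot offset keeps the cursor tip where
// it actually points on screen.
#include <windows.h>

void DrawCursorOnCapture(HDC memDC)
{
    CURSORINFO ci = { sizeof(ci) };
    if (GetCursorInfo(&ci) && (ci.flags & CURSOR_SHOWING))
    {
        ICONINFO ii = {};
        if (GetIconInfo(ci.hCursor, &ii))
        {
            int x = ci.ptScreenPos.x - ii.xHotspot;
            int y = ci.ptScreenPos.y - ii.yHotspot;
            DrawIconEx(memDC, x, y, ci.hCursor, 0, 0, 0, NULL, DI_NORMAL);

            // GetIconInfo creates bitmaps that the caller must free.
            if (ii.hbmMask)  DeleteObject(ii.hbmMask);
            if (ii.hbmColor) DeleteObject(ii.hbmColor);
        }
    }
}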
Anyone interested in this topic may find the following links helpful (the numerical order does not represent importance):
Keywords mentioned: GetDC(), BitBlt(), CAPTUREBLT flag, GDI
What is the best way to take screenshots of a Window with C++ in Windows?
How can I take a screenshot in a windows application?
Keywords mentioned: DirectX, buffer
Fastest method of screen capturing on Windows
How to save backbuffer to file in DirectX 10?
Keywords mentioned: mouse cursor, cursor image, hot spot
Capture screen shot with mouse cursor
C# - Capturing the Mouse cursor image
Python - Take screenshot including mouse cursor
Keywords mentioned: PowerShell, CopyFromScreen()
How can I do a screen capture in Windows PowerShell?
Capture screenshot of active window?
Q&A about accessing video memory
Access the whole video memory (keywords: DRM)
Access the whole video memory through OpenGL programming (keywords: raw video memory, video driver)
API to get the graphics or video memory (keywords: graphics RAM)
direct data write to video memory
Direct video buffer access
How to write data directly into video memory?
Is direct video card access possible? (No API)
I want to create a game with sound effects. When I start the game, the background music should play until the game is over. When I click on something in the game (such as a button), a sound effect should play, but the background music stops.
How can I make the background music play continuously while the sound effect from an object is playing?
I already have these scripts...
Card script...
on openCard
play "backgroundmusic.wav" looping
end openCard
Buttons (or any object)...
on mouseup
play "sound.wav"
end mouseup
How to play these sounds together?
Update: I found a game uploaded to Game Jam that was ranked #1. When I played it, the sound was amazing: it has background music and sound effects. But the owner of the game did not upload the LiveCode stack file, so I cannot study it. The game is entitled Space Shooter Game. Its sound is what I expect.
Note:
As I figured out from the answers, using a player object can work, but that requires QuickTime, which I do not have installed on my PC. I also want the sounds to be able to play on mobile devices.
As it stands, the soundChannel property has no effect in LiveCode and is only provided for HyperCard compatibility.
Currently on desktop there are two ways to do multi-channel sound: 1) play imported sounds as one channel, and use a player object as the second channel, or 2) use two player objects.
Typically, a good option is to import short sounds as sound effects into the stack (ones that only play once each), and reserve the player object for background music. Imported sounds usually play with the least latency; however, you cannot play multiple imported sounds simultaneously, as attempting to play a second sound while a first is playing will stop the first to play the second. If you need to play asynchronous sound effects, this option will not work; you must use a combination of playback options.
Multiple players can be used, but note that there can be some latency between loading a sound (assigning a sound's file path to a player) and playing it.
Also note that truly seamless playback of a looped track is difficult if not impossible; at some point LiveCode will be susceptible to a system event that causes a slight pause between loops. A while back, Trevor Devore made an addition to his Enhanced QuickTime external that enabled truly seamless looping of audio. However, with Apple getting rid of QuickTime, it's unknown how much longer this option will be useful.
With the enhancements the RunRev guys have been making to the engine, it's likely we'll see improvements in media playback and management, hopefully sooner rather than later.
In the LiveCode forums, they suggest using player objects on the card instead and telling them to play.
In HyperCard, you could set the soundChannel property for that. Have you checked in the LiveCode documentation whether it supports that? The docs for the play command and the sound property might also help; maybe those contain hints. FWIW, in HC
set the soundChannel to 1
play "BackgroundMusic"
set the soundChannel to 2
play "SoundEffect"
would play the sound effect and background music at the same time. Maybe that's how it works in LiveCode as well?
The multimedia capabilities are going through a transformation. Previously everything (well, almost everything) was built around QuickTime, and you needed to add a player control for each concurrent sound. Currently the whole foundation is changing because Apple dropped QuickTime, but assuming you develop for desktop, you should still (again) be able to add a player object and then use:
start player "name of player"
You can also create a player object dynamically with
create player "my player"
and then use
set the filename of player "my player" to "/path/to/your/audio/file"
before starting your sound. As long as you have different players for your different sounds, they should play simultaneously.
On mobile, mobilePlaySoundOnChannel can put the background music on its own channel, for example:
on openCard
put specialFolderPath("engine") & "/soundfx/backgroundmusic.wav" into tSound
mobilePlaySoundOnChannel tSound, "Background", "looping"
end openCard
on mouseup
play "sound.wav"
end mouseup
I'm using VirtualDub version 1.9.11 to screen-capture video game play on my computer. It works amazingly well for video; however, I can't get my audio to record.
My motherboard is a Gigabyte ga-z77x-ud5h. And I have downloaded the latest audio drivers and even tried older drivers.
Here is an image of what my Sound options in VirtualDub SHOULD resemble. This comes from this VirtualDub tutorial http://www.genadmission.com/vdubguide.html
Here is what my inputs look like, none...
And here are what my sources look like, none...
Any clues on why I have no sources and no inputs? If I plug in a microphone I can get mic input, but that's it.
I learned about using VirtualDub from this video tutorial: http://www.youtube.com/watch?v=fvfPXn5VQ0w
Solved. I had to enable Stereo Mix, which was disabled by default... why on earth Windows 7 disables that by default is beyond me.
Seen in this video: http://www.youtube.com/watch?v=mjQ_qS-LaoU
I've run DISM and SFC to see if there are any errors on the system. SFC said a couple of directories had dual owners and corrected that, but it didn't help.
I tried re-connecting the Pinnacle device, thinking that giving the system a choice of more than one device might fix something. VirtualDub sees both video capture devices and will use either one: but it still does not give me a choice of audio input devices.
However, it will let me take the video from the FHD HDMI input and sound from the Pinnacle device, so I have a work-around. It's stupid, and I'd really like to get this working properly, but at least I can use this solution.
I am working on a site which plays ads before the real video plays.
The business requirement is that the ads should play before the video plays.
I am using Watir for testing. Can you help me in this regard?
Thanks.
You may want to investigate Sikuli. I've seen other threads where people were using it in combination with Watir to work with things like Flash. However, since it works based on visual recognition, I expect it would not work at all with a video while it is playing (a changing image that might only be 'right' for a fraction of a second) unless there is some relatively static aspect of the screen that could be used to detect that video playback is in progress. See this blog posting for more info.
I was wondering if there is a tool similar to jCrop, with the exception that instead of an image, it would let the user crop an audio file? Google didn't give me any useful results, sadly :(
The reason I'm asking is that I'm making a tool to convert audio files to popular ringtone formats, and only letting the user specify the offsets as numbers is somewhat inconvenient. Obviously the tool doesn't have to be in JavaScript; anything that fits into a website is fine.
Here's a browser-based audio editor written in Flash that you could probably adapt (it supports cropping):
http://www.hisschemoller.com/2010/audio-editor-1-0/
One thing I found a bit confusing is that you have to hold down the play button in the editor to play the full sound.