Samsung Fame GT-S6812 call recorder not working

I am using a Samsung Fame; this phone ships without Java.
I want to record my calls, and for that I have tried almost every application on Google Play, but calls are not recorded properly.
The recordings come out as noise: they sound the way a song does when you fast-forward it on a tape recorder.
Is there any solution or app? I need to record important calls.
Please give me a solution.
Note: Jelly Bean 4.1.2, no Java.
A few apps show a message along the lines of "call recording not started, Java not started."

Related

Ableton Live 11 Lite: lagging or distorted recording

I'm using Ableton Live 11 Lite while recording a cover, and discovered that all of my recordings suddenly come out as a jittery mess. I'm using a Focusrite Scarlett Solo, a Squier Telecaster, a Maestro Ranger Over-Drive pedal, and an Audio-Technica mic. It's a new project; I have a metronome set up for recording Privately Owned Spiral Galaxy, and the audio interface splits the mic and guitar into separate tracks that I'm trying to record individually. I posted a sample of the God-awful sound of the recording here for reference.
P.S. I know this isn't the correct page, but there are hardly any Ableton/DAW questions on the Music site and no real answers, as opposed to the several I found on SO.
I would start by adjusting the buffer size in your audio driver settings. Glitchy, jittery recordings usually mean the buffer is too small for your machine to keep up, so try increasing it (e.g. from 128 to 256 or 512 samples) in Live's audio preferences.

Detecting when an Apple TV 4th generation has woken from sleep

I'm working on some home automation programs, and one of the things I want to be able to do is detect when my 4th generation Apple TV has woken from sleep. This will generally only ever happen when someone presses a button on its Siri Remote to wake it up.
I have a PC (connected to the same TV as the Apple TV) that has a Pulse-Eight USB-CEC adapter, so naturally the first thing I tried was using CEC to determine when the Apple TV is awake. Unfortunately it's not reliable, since monitoring the Apple TV's power status to see when it wakes up produces false positives. (I should note that I do not have "Control TVs and Receivers" enabled on the Apple TV, and can't turn it on for the particular project I'm working on because I need the Apple TV to not change the TV's input.)
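(For anyone trying the same CEC approach: with a Pulse-Eight adapter, the usual way to poll power status is cec-client's pow command. A hedged one-liner, where 4 is the CEC logical address of playback device 1, which is typically where an Apple TV sits:
echo "pow 4" | cec-client -s -d 1
This is the kind of polling that produced the false positives described above.)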
I'm trying to think of some other way to do this. I'm open to any possibilities, including things like:
Making use of private APIs on the Apple TV
Running an 'always on' program in the background of the Apple TV that sends a signal when the Apple TV wakes up, if that's even possible. (I suspect that it isn't.)
Monitoring the Bluetooth communication between the Siri Remote and the Apple TV, if that's possible
Somehow filtering HDMI-CEC commands so that I can turn on 'Control TVs and Receivers', allow the Apple TV's CEC commands for turning on and off the TV, and exclude commands for changing the TV's input.
Any other method, no matter how hacky or ridiculous, as long as it works!
Does anyone have any suggestions? I'm running out of things to try!
I tried to post the text below on the Apple discussion/support communities but was told I don't have the right to post this content. Maybe someone in this group can succeed in doing it:
Apple TV 4 CEC integration is great when it works, but it doesn't work all the time or with all the various equipment out there; search across forums and you will see lots of unhappy users. I would like to use a Raspberry Pi to detect when my Apple TV goes to sleep and wakes up, and programmatically turn my TV on or off using its RS-232C port or custom CEC commands.
I used a Bonjour services explorer and compared every single result between the sleep and on states, and there are no differences whatsoever. I would have expected Apple to welcome such automation projects and make this information readily available via a variable such as status: sleep or status: on.
Is there a way I could tell the two states apart via the network connection?
If not, could one build a tvOS app which runs in the background and makes this information available to clients somehow?
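(If anyone wants to reproduce that Bonjour comparison, here is a minimal Python sketch using the python-zeroconf package; the two service types listed are typical Apple TV advertisements and are an assumption to adjust for your network:

from zeroconf import Zeroconf, ServiceBrowser, ServiceListener

class Dump(ServiceListener):
    def add_service(self, zc, type_, name):
        # print each advertised service and its TXT record properties
        info = zc.get_service_info(type_, name)
        print(name, info.properties if info else None)
    def update_service(self, zc, type_, name):
        pass
    def remove_service(self, zc, type_, name):
        print("removed:", name)

zc = Zeroconf()
for svc in ("_airplay._tcp.local.", "_mediaremotetv._tcp.local."):
    ServiceBrowser(zc, svc, Dump())
input("Press Enter to stop...\n")
zc.close()

Run it once while the device is awake and once while it sleeps, and diff the output.)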
I finally found a method that seems to work consistently. This method is incredibly hacky and not at all the sort of way I'd prefer to do this, but it's the only one I've found so far that works consistently.
I have taken an old USB webcam and affixed it to the front of my Apple TV so that its lens is directly in front of the Apple TV's front-facing light. Whenever the Apple TV is asleep, I simply check for the light turning on by taking images from the camera and analyzing their average luminosity. Since the lens is right next to the light, when it turns on it creates a huge blown-out white circle in the image that's incredibly easy to detect.
As long as the Apple TV is asleep, the light turning on seems to indicate 100% of the time that it has woken up. I have yet to find a single incident of either a false positive or false negative.
Since pressing buttons on the Siri remote causes this light to blink, this also means that I can detect buttons being pressed by looking for changes in the light while the Apple TV is awake. It's not 100% accurate, since some button presses are faster than the frame rate of my crappy old USB webcam, but it works well enough.
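For illustration, a minimal Python sketch of that luminosity check, assuming OpenCV (cv2); the device index and the LUMA_THRESHOLD value are placeholders you would tune for your own webcam and LED:

import time
import cv2

LUMA_THRESHOLD = 60  # hypothetical brightness cutoff; tune empirically

cap = cv2.VideoCapture(0)  # first attached USB webcam
was_lit = False
while True:
    ok, frame = cap.read()
    if not ok:
        continue
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    lit = gray.mean() > LUMA_THRESHOLD  # average luminosity of the frame
    if lit and not was_lit:
        print("LED on -> Apple TV likely just woke up")
    was_lit = lit
    time.sleep(0.1)  # ~10 polls per second is plenty for a status LED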
I would vastly prefer to find a better method of doing this, like making a request over the LAN to the Apple TV where the response clearly indicates it being awake or asleep, but so far it doesn't look like that's possible.
Here I am, six and a half years later, and I've finally found a better way to get the power state of my Apple TV.
I can simply use pyatv, whose Power interface exposes a power_state property that reports the Apple TV's current power state.
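A minimal sketch, assuming pyatv is installed and can reach the device (the IP address below is a placeholder for your Apple TV):

import asyncio
import pyatv

async def main():
    loop = asyncio.get_running_loop()
    confs = await pyatv.scan(loop, hosts=["192.168.1.50"])  # placeholder IP
    if not confs:
        print("no Apple TV found")
        return
    atv = await pyatv.connect(confs[0], loop)
    try:
        # e.g. PowerState.On or PowerState.Off
        print(atv.power.power_state)
    finally:
        atv.close()

asyncio.run(main())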

Taking input and redirecting it to an emulated rom (Twitch Plays Pokemon)

So if anyone has been following Twitch Plays Pokemon for the last week or so (http://www.twitch.tv/twitchplayspokemon), you'll know what I'm talking about. They are streaming an emulated version of Pokemon Red and allowing members to type controls into the chat. The controls they type correspond to the buttons on an actual Game Boy and are somehow 'sent' to the emulator as inputs. For example, if someone types 'start', it pops up the start menu in the game.
Is there any documentation online which could show me how to do something like this (albeit on a smaller scale)?
Thanks!
It's actually quite simple once you get the hang of emulating keystrokes.
On Windows you can use the keybd_event WinAPI function to simulate keystrokes. Here's some example C++ code that holds down the A key for 150 milliseconds:
#include <windows.h> // declares keybd_event, Sleep, and KEYEVENTF_KEYUP

keybd_event(0x41, 0, 0, 0); // starts holding down virtual key 0x41 (A)
Sleep(150); // keeps the key held for 150 ms
keybd_event(0x41, 0, KEYEVENTF_KEYUP, 0); // releases key 0x41 (A)
(you can find values for other keys here)
Once you get keystroke emulation working, you just need to get the chat messages into your program. Either make your software connect to the IRC channel of your Twitch chat directly, or run HexChat (or any other IRC client), connect it to the Twitch chat by following this guide, enable logging, and have your software parse the chat log by reading the file line by line and waiting for new lines once you reach the end of it (sketched below).
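A sketch of that log-parsing approach in Python: it follows the log like tail -f and maps recognized chat commands to keystrokes. The log path, the line format, and the press_key() helper are hypothetical; adapt them to your IRC client and your keystroke code:

import time

COMMANDS = {"up", "down", "left", "right", "a", "b", "start", "select"}

def follow(path):
    # yield new lines as they are appended to the file, like `tail -f`
    with open(path, encoding="utf-8") as f:
        f.seek(0, 2)  # jump to the end of the file
        while True:
            line = f.readline()
            if not line:
                time.sleep(0.1)
                continue
            yield line

for line in follow("twitch-chat.log"):  # placeholder log path
    # assumes the chat message is whatever follows the last ':' on the line;
    # adjust this to your client's actual log format
    word = line.rsplit(":", 1)[-1].strip().lower()
    if word in COMMANDS:
        print("would send keystroke for:", word)  # e.g. call press_key(word)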
I wrote my own Twitch Plays ... software on Windows in C# in a matter of minutes and then released a polished, configurable version that should work with any game here.

Why is my Clojure project slow on Raspberry Pi?

I've been writing a simple Clojure framework for playing music (and later some other stuff) for my Raspberry Pi. The program parses a given music directory for songs and then starts listening for control commands (such as start, stop, next song) via a TCP interface.
The code is available via GitHub:
https://github.com/jvnn/raspi-framework
The current version works just fine on my laptop, it starts playing music (using the JLayer library) when instructed to, changes songs, and stops just as it should. The uberjar takes a few seconds to start on the laptop as well, but when I try to run it on the Raspberry Pi, things get insanely slow.
Just starting up the program so that all classes are loaded and the actual program code starts executing takes well over a minute. I tried running it with the -verbose:class switch, and it seems the JVM spends the whole time just loading tons of classes (for Clojure and everything else).
When the program finally starts, it does react to the commands given, but the playback is very laggy. There is a short sub-second sound, then a pause of almost a second, then another sound, another pause, etc. So the program is trying to play something, but it just can't do it fast enough. CPU usage is somewhere close to 98%.
Now, having an Android phone and all, I'm sure Java can be executed on such hardware well enough to play some MP3 files without any trouble. And I know that JLayer (or parts of it) is used in the libGDX game development framework (which also runs on Android), so that shouldn't be a problem either.
So everything points to me being the problem. Is there something I can do, either with Leiningen (AOT is already enabled for all files), the Raspberry Pi, or my code, that could make things faster?
Thanks for your time!
UPDATE:
I made a tiny test case to rule out some possibilities and the problems still exist with the following Clojure code:
(ns test.core
  (:import [javazoom.jl.player.advanced AdvancedPlayer])
  (:gen-class))

(defn -main
  []
  (let [filename "/path/to/a/music/file.mp3"
        fis (java.io.FileInputStream. filename)
        bis (java.io.BufferedInputStream. fis)
        player (AdvancedPlayer. bis)]
    (doto player (.play) (.close))))
The project.clj:
(defproject test "0.0.1-SNAPSHOT"
  :description "FIXME: write description"
  :dependencies [[org.clojure/clojure "1.5.1"]
                 [javazoom/jlayer "1.0.1"]]
  :javac-options ["-target" "1.6" "-source" "1.6" "-Xlint:-options"]
  :aot :all
  :main test.core)
So, no core.async and no threading. The playback did get a bit smoother, but it's still about 200 ms of music, then a 200 ms pause.
Most obvious to me is that you have a lot of un-hinted interop code, leading to very expensive runtime reflection. Try running lein check (I think that's built in, but maybe you need a plugin) and fixing the reflection issues it points out. You can also add (set! *warn-on-reflection* true) at the top of a namespace and type-hint the interop calls (e.g. ^AdvancedPlayer) so the compiler can resolve methods at compile time instead of reflecting on every call.

Low-latency sounds on key presses

I am trying to write an application (I'm a GUI first-timer) for my son, who has autism. There is a video player in the top half and a text-entry area in the bottom. When letters are typed, sounds are produced to mimic the words in the video.
There have been other posts on this site about playing sounds on key presses using GStreamer via a system call. I have also tried libcanberra, but both seem to have significant delays between sounds. I can write the app in Python or C, but will likely do at least some of it in C.
I also want to mention that the video portion is being played by GStreamer. I tried to create two instances of GStreamer to avoid expensive system calls, but the audio instance seemed to kill the app when called.
If anyone has any tips on creating faster-responding sounds, I would really appreciate it.
You can upload a raw audio sample directly to PulseAudio, so there is no decoding and (perhaps) fewer extra context switches, by using the following function from libcanberra:
http://developer.gnome.org/libcanberra/unstable/libcanberra-canberra.html#ca-context-cache
The next ca_context_play() for that sample will then use the cached copy.
However, the biggest problem you'll encounter in this scenario (with simultaneous video playback) is that the audio device might be configured with large latency under PulseAudio (up to half a second or more for normal playback). It may be reasonable to file a bug against libcanberra asking for a LOW_LATENCY flag, as AFAIK it currently doesn't attempt to minimize delay for sound events. That would be great to have.
GStreamer's pulsesink could probably achieve low latency too (it has some properties for that), but I'm afraid it won't be as lightweight as libcanberra, and you won't be able to cache a sample, for instance. Ideally, GStreamer could also learn to cache samples, or pre-fill PulseAudio...
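(For reference, the pulsesink properties in question are buffer-time and latency-time, both in microseconds. A hedged one-liner, assuming GStreamer 1.x and a short WAV sample named beep.wav:
gst-launch-1.0 filesrc location=beep.wav ! wavparse ! audioconvert ! audioresample ! pulsesink buffer-time=20000 latency-time=10000
Smaller values lower latency at the cost of a higher risk of underruns.)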
