I am having trouble playing a sound in Android Studio. My .wav file was highlighted in red, and Android Studio said something about how .wav files aren't recognised, so I chose to interpret it as a Text file. I think that is the issue, and I don't know what to do.
public void playSoundADROID(View view) {
    Log.i("Play sound", "Hello World");
    // load the clip from res/raw/gsharp4.wav and play it
    MediaPlayer sound = MediaPlayer.create(this, R.raw.gsharp4);
    sound.start();
}
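One thing worth adding to the method above: MediaPlayer.create() returns null when it cannot open the resource, so a guard makes any loading failure visible instead of a NullPointerException at sound.start(). A minimal sketch:

MediaPlayer sound = MediaPlayer.create(this, R.raw.gsharp4);
if (sound == null) {
    // create() returned null: the raw resource could not be opened
    Log.e("Play sound", "Failed to load R.raw.gsharp4");
    return;
}
sound.start();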
[Screenshots: the .wav file in the project tree, and the file-type registration showing .wav treated as a text file]
I deleted .wav from the File name patterns list in Settings, but it still isn't working.
I've been using the Xamarin LibVLCSharp player for a while on iOS and Android and it works well. I've added a transcoding feature, but for some reason my LibVLCSharp transcoding code (change codec, bitrate, fps) works fine on Android, while the same transcode media options on iOS (iPhone 12 Pro, iOS 14.4.2) produce no changes in the output video file compared to the input video file. I have attached the sample video file I am attempting to transcode.
Here is my C# code (which contains the transcode media options that work on Android, but not on iOS):
Core.Initialize();
_vlc = new LibVLC();
_player = new LibVLCSharp.Shared.MediaPlayer(_vlc);
_media = new Media(_vlc, _source, FromType.FromPath);
_media.AddOption ($":sout=#transcode{{vcodec=h264,vb=700000,acodec=aac,ab=96,channels=2,samplerate=44100,scodec=none}}:std{{access=file,mux=ts,dst={destination}}}");
var result = _player.Play(_media);
Any help or pointers on why this transcoding command is not producing any change in the output media file compared to the original would be greatly appreciated!
iOS Logfile
https://drive.google.com/file/d/1C0UGzE7hqCJYozTvubyK1BKWcZvG18VY/view
Sample file I'm using to test transcoding:
https://drive.google.com/file/d/1MvJKJnu2ii6XMANuWRkHLqmu78e9x03q/view
Try with
$":sout=#transcode{{venc={{module=avcodec{{codec=h264_videotoolbox}},vcodec=h264}},venc={{module=vpx{{quality-mode=2}},vcodec=VP80}},samplerate=44100,soverlay}}:file{{dst={destination},mux=ts}}"
Feel free to remove unnecessary options, but from my findings this exact string should work. Note the doubled braces: inside a C# interpolated string every literal { and } must be escaped as {{ and }}, as in the original AddOption call; only {destination} is left single so it gets interpolated. On iOS this string selects the hardware h264_videotoolbox encoder explicitly, which is presumably why it works where the generic vcodec=h264 option did not.
I recently started playing with Processing. I want to create a simple FFT visualizer that imports a music file using the Sound library. Here's my code and the console output.
[Screenshot: console output]
import processing.sound.*;

SoundFile file;

void setup() {
  // size() needs an explicit width and height
  size(640, 360);
  background(51);
  // the file name is resolved relative to the sketch's data folder
  file = new SoundFile(this, "song.mp3");
  file.play();
}

void draw() {
}
Can someone explain why this is happening and how it can be fixed?
By the way, the sound file (song.mp3) is located in the same folder as the .pde file.
Put the mp3 file in a folder called data, which should sit next to your .pde file inside the sketch folder.
This might not fix your issue though. If the issue persists, then it's the fault of the SoundFile library and there is nothing you can currently do.
I have heard people recommend the Minim library. Try looking into that, as continuing to use the SoundFile library will only lead to problems.
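For reference, here is a minimal Minim sketch, assuming Minim is installed through Processing's library manager and song.mp3 sits in the sketch's data folder:

import ddf.minim.*;

Minim minim;
AudioPlayer player;

void setup() {
  size(640, 360);
  background(51);
  minim = new Minim(this);
  // loadFile also resolves names relative to the data folder
  player = minim.loadFile("song.mp3");
  player.play();
}

void draw() {
}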
I had the same problem when trying to load mono mp3s. Changed them to stereo and they worked.
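If you have many files to fix, ffmpeg can do the mono-to-stereo conversion from the command line, e.g. ffmpeg -i song.mp3 -ac 2 song-stereo.mp3 (the file names here are placeholders).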
I want to convert any audio file (flac, wav, ...) to mp3 with Python.
I am a noob; I tried pydub, but I couldn't find out how to make ffmpeg work with it, and if I'm right it can't convert flac files.
The idea of my project is to:
Make MusicBee send the path of the 'now playing' track (by pressing the assigned shortcut) to my Python file, which would convert the track if it is not already mp3 and send it to a folder (everything in the background, so I don't have to leave what I'm doing for the operation).
You can use the following code:

from pydub import AudioSegment

# decode an existing wav file
wav_audio = AudioSegment.from_file("audio.wav", format="wav")

# or interpret a headerless file as raw PCM; the frame rate,
# channel count and sample width must then be supplied by hand
raw_audio = AudioSegment.from_file("audio.raw", format="raw",
                                   frame_rate=44100, channels=2, sample_width=2)

wav_audio.export("audio1.mp3", format="mp3")
raw_audio.export("audio2.mp3", format="mp3")
You can also look at the pydub documentation for more options.
For flac the same pattern works:

flac_audio = AudioSegment.from_file("sample.flac", "flac")
flac_audio.export("sampleMp3.mp3", format="mp3")
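A note on the ffmpeg part of the question: pydub handles wav and raw natively but shells out to the ffmpeg executable for everything else, including flac and mp3. So ffmpeg has to be installed and on your PATH (or pointed to explicitly by setting AudioSegment.converter to the full path of the binary); once that is in place, the flac snippet above works.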
I have an application for iPad.
This application records the voice from the microphone.
The audio formats must be PCM, MP3 and WAV files. The MP3 file I get by starting from the original raw file and converting it with LAME.
Unfortunately I have not found any example that shows how to convert a PCM file to a WAV file.
I just noticed that if I set the file extension to .wav, the application saves the raw data without problems, so I think there is no conversion from PCM to WAV files.
Correct?
PS: Sorry for my English ... I use Google Translate
WAV is a kind of box, and PCM is what goes inside the box. There are many container formats like this, for example MP4: it can contain audio, video or both, even multiple video or audio streams. Or zip files: a zip can contain text files, but also images, PDFs, ... Yet you can't say "how can I convert a zip file to the text file inside the zip".
If you want to convert PCM data to a WAV file you should not have many problems, because WAV files are quite simple. Take a look at the header layout described in WAVE PCM soundfile format.
You first need that header, and after it you can just append all your PCM data (see the data field).
Converting PCM to WAV isn't too hard. Both formats contain the same raw PCM data; the only difference is the header (WAV has one, PCM doesn't). So just adding a WAV header does the trick: take the PCM data and put the WAV header on top of it. To see how to build the header, check this link.
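To make "add the wav header" concrete, here is a minimal self-contained Java sketch that wraps raw 16-bit little-endian PCM in the canonical 44-byte header; the file names, sample rate, and channel count are assumptions you must match to your recording:

import java.io.FileOutputStream;
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.file.Files;
import java.nio.file.Paths;

public class PcmToWav {
    public static void main(String[] args) throws IOException {
        // assumption: recording.pcm holds raw 16-bit little-endian samples
        byte[] pcm = Files.readAllBytes(Paths.get("recording.pcm"));
        int sampleRate = 44100;          // must match the recorder's settings
        short channels = 1;              // 1 = mono, 2 = stereo
        short bitsPerSample = 16;

        // canonical 44-byte RIFF/WAVE header, multi-byte fields little-endian
        ByteBuffer h = ByteBuffer.allocate(44).order(ByteOrder.LITTLE_ENDIAN);
        h.put("RIFF".getBytes());
        h.putInt(36 + pcm.length);       // size of everything after this field
        h.put("WAVE".getBytes());
        h.put("fmt ".getBytes());
        h.putInt(16);                    // fmt chunk size for plain PCM
        h.putShort((short) 1);           // audio format 1 = uncompressed PCM
        h.putShort(channels);
        h.putInt(sampleRate);
        h.putInt(sampleRate * channels * bitsPerSample / 8);   // byte rate
        h.putShort((short) (channels * bitsPerSample / 8));    // block align
        h.putShort(bitsPerSample);
        h.put("data".getBytes());
        h.putInt(pcm.length);            // size of the sample data

        try (FileOutputStream out = new FileOutputStream("recording.wav")) {
            out.write(h.array());        // header first...
            out.write(pcm);              // ...then the untouched PCM samples
        }
    }
}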
I was working on a system that accepts only wav files, but what I was receiving from Amazon Polly was pcm, so I finally did this and got my issue resolved. Hope it helps someone. This is a Node.js example.
// https://github.com/TooTallNate/node-wav
const FileWriter = require('wav').FileWriter
const Stream = require('stream')

// wrap Polly's AudioStream buffer in a readable stream
let audioStream = bufferToStream(res.AudioStream);

// FileWriter prepends a wav header matching these PCM parameters
var outputFileStream = new FileWriter(`${outputFileFolder}/wav/${outputFileName}.wav`, {
  sampleRate: 8000,
  channels: 1
});

audioStream.pipe(outputFileStream);

function bufferToStream(binary) {
  const readableInstanceStream = new Stream.Readable({
    read() {
      // push the whole buffer, then signal end-of-stream
      this.push(binary);
      this.push(null);
    }
  });
  return readableInstanceStream;
}
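One caveat: the sampleRate and channels passed to FileWriter must match what Polly actually returned. Polly's pcm output is 16-bit signed little-endian mono at the SampleRate you requested (8000 or 16000), so if the numbers disagree the resulting wav plays at the wrong speed.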
I have an application that shows a video stream from a camera, and is able to save the video stream to a file on request. When I run this command from the terminal I see the video in VLC, and the content is saved to a file as expected:
vlc v4l:///dev/video0:norm=ntsc ':sout=#duplicate{dst=display{noaudio},dst="transcode{vcodec=wmv2,vb=800}:file{dst=aaa.wmv}"}'
However, when I save a file from my application there are no time codes in the file, so when I open the file in another application I'm unable to move backwards or forwards in it. Nor can I see how long the file is.
Here is a simplified version of my code:
factory = new MediaPlayerFactory();
mainframe = new JFrame("Video Viewer");
fullscreenStrategy = new DefaultFullScreenStrategy(mainframe);
Canvas canvas = new Canvas();
canvas.setBackground(Color.black);
EmbeddedMediaPlayer player = factory.newEmbeddedMediaPlayer(fullscreenStrategy);
mainframe.add(canvas);
player.setVideoSurface(factory.newVideoSurface(canvas));
...
String media = "v4l:///dev/video0:norm=ntsc";
String filename = "aaa.wmv";
// same duplicate/transcode chain as the command line above
String mediaoptions = ":sout=#duplicate{dst=display," +
        "dst=\"transcode{vcodec=wmv2,vb=800}:" +
        "file{dst=" + filename + "}\"}";
player.prepareMedia(media, mediaoptions);
player.start();
aaa.wmv is created, but without time codes.
What can be wrong? The only difference I see from the command line version is that I use a Canvas widget with the EmbeddedMediaPlayer instead of the native VLC view window.
Never mind, I found the problem. For the time codes to be saved properly it is necessary to call player.release(). I did call it, but before the release call I copied the file to another location; since release() hadn't run yet, the file was still incomplete. When I changed the code to call player.release() first and copy the file afterwards, it worked as expected.
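In other words, the order of operations matters. A minimal sketch of the corrected sequence, with a placeholder destination path:

player.stop();
player.release();  // finalizes the output file; the time codes are written here
// only after release() returns is aaa.wmv complete and safe to copy
java.nio.file.Files.copy(
        java.nio.file.Paths.get("aaa.wmv"),
        java.nio.file.Paths.get("/backup/aaa.wmv"));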