Getting a crash while using NetInfo.isConnected in iOS React Native - react-native-ios

I am trying to read the network state change in my React Native iOS code as:
componentDidMount() {
  NetInfo.isConnected.addEventListener('connectionChange', this._handleNetworkStateChange);
}

componentWillUnmount() {
  NetInfo.isConnected.removeEventListener('connectionChange', this._handleNetworkStateChange);
}

_handleNetworkStateChange = (isConnected) => {
  this.setState({
    netStatus: isConnected
  });
}
1) When the network is initially active:
While the network state is changing from online to offline, the iOS app crashes directly instead of reading the state.
2) When the network is initially inactive/offline:
While changing the network state from offline to online, it works properly and shows and loads the view perfectly.
Any help would be appreciated, as I have gone through many tutorials but am not getting the solution. Thanks in advance.
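For reference only (an assumption, not a confirmed fix for this crash): NetInfo later moved out of React Native core into the @react-native-community/netinfo package, where the subscription returns an unsubscribe function instead of pairing add/removeEventListener. A minimal sketch of the same pattern against that package:

import NetInfo from '@react-native-community/netinfo';

componentDidMount() {
  // the community package's addEventListener returns an unsubscribe function
  this.unsubscribeNetInfo = NetInfo.addEventListener(state => {
    this.setState({ netStatus: state.isConnected });
  });
}

componentWillUnmount() {
  // clean up so no state update fires on an unmounted component
  if (this.unsubscribeNetInfo) {
    this.unsubscribeNetInfo();
  }
}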

Related

How to implement an idle timeout in a NativeScript Android application

I am building a financial application in NativeScript Angular. I need something like this: if the app is opened and then left running in the background past some set idle timeout, it should redirect to a page we specify. I couldn't find a proper reference for this in NativeScript; can anyone please add a solution for the idle timeout? I have checked NativeScript's extended activity but couldn't get it working properly.
You should start by reading the documentation. What you need are lifecycle hooks.
Read: https://docs.nativescript.org/angular/core-concepts/application-lifecycle#use-application-events
applicationOn(suspendEvent, (args: ApplicationEventData) => {
  setTimeout(() => {
    // do what you want after a certain amount of time
  }, 5000);
});
or
applicationOn(resumeEvent, (args: ApplicationEventData) => {
  // compare the current datetime with the last datetime saved in the suspendEvent handler
});
Don't forget that your app can be force-closed, so you have to handle that case depending on your needs.
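Putting the two events together, a minimal sketch of the suggested approach (the import path, the timeout value, and the redirect step are assumptions for illustration):

import { on as applicationOn, suspendEvent, resumeEvent, ApplicationEventData } from "tns-core-modules/application";

const IDLE_TIMEOUT_MS = 5 * 60 * 1000; // hypothetical 5-minute idle window
let suspendedAt = 0;

applicationOn(suspendEvent, (args: ApplicationEventData) => {
  // remember when the app went to the background
  suspendedAt = Date.now();
});

applicationOn(resumeEvent, (args: ApplicationEventData) => {
  // if the app was backgrounded longer than the idle window, redirect
  if (suspendedAt && Date.now() - suspendedAt > IDLE_TIMEOUT_MS) {
    // navigate to the page you specified, e.g. via RouterExtensions in Angular
  }
  suspendedAt = 0;
});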

Node.js API returning nothing

I am trying to implement a MEAN app, for which I made a Node server:
app.get("/posts", (req, res) => {
  const posts = [{ "title": "a", "context": "b" }, { "title": "c", "context": "d" }];
  res.send(posts); // tried even with res.status(200).json(posts)
});
When I check it with an API tester it works well (output snapshot with the API tester).
When I try to access it with an Angular service:
getposts() {
  var url = 'http://localhost:3000/posts';
  // subscribe is asynchronous: the two lines below run before the response arrives
  this.http.get<post[]>(url).subscribe(data => this.posts = data);
  console.log(this.posts);
  return this.posts;
}
When I do console.log(this.posts) it returns [].
Can someone please help? I have been struggling with this for the last 2 days.
As you have not added a screenshot from the browser inspector's Network tab, I am guessing; this answer will help you in case my guess is right.
Try setting the below in your front-end project (Angular in your case):
Try setting the below in your back-end project (Node in your case):
Also, verify your updates, as in the screengrab from the browser's inspector (Chrome in my case).
Please note: the mentioned response headers have special meaning, so kindly look for more info with a simple Google search. Use these settings during your development phase on localhost.
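Given the mention of response headers, the settings in question were presumably CORS-related; a minimal sketch of what that typically looks like in an Express back end, as an assumption only (the origin and header list here are illustrative):

// Node/Express back end: allow the Angular dev server's origin during development
app.use((req, res, next) => {
  res.header("Access-Control-Allow-Origin", "http://localhost:4200");
  res.header("Access-Control-Allow-Headers", "Origin, X-Requested-With, Content-Type, Accept");
  next();
});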

Google Cloud Platform - RecognitionAudio not set

Since this morning, I'm receiving the following error from Google Cloud Platform when transcribing audio to text:
{ Error: RecognitionAudio not set.
at Operation._unpackResponse (/usr/src/app/node_modules/google-gax/build/src/longRunningCalls/longrunning.js:145:31)
at noCallbackPromise.currentCallPromise_.then.responses (/usr/src/app/node_modules/google-gax/build/src/longRunningCalls/longrunning.js:131:18)
at <anonymous> code: 3 }
Please note that I have not changed anything in my code, and this code was working perfectly before this. The code is as follows:
const speech = require('@google-cloud/speech');

const client = new speech.SpeechClient();
client.longRunningRecognize({
  config: {
    encoding: "FLAC",
    enableWordTimeOffsets: true,
    languageCode: "en-US"
  },
  audio: {
    uri: "gs://some-cloud-bucket/audiofile.flac"
  }
});
As you can see, for RecognitionAudio I'm sending a Google Cloud Storage URI as described in their docs here: https://cloud.google.com/speech-to-text/docs/reference/rest/v1/RecognitionAudio
I have confirmed that both the bucket and the audio file exist. Keep in mind this was working fine yesterday.
I have no clue where to look in order to solve this error. The Cloud status page says their platforms are up and running and having no issues.
Are any of you experiencing the same problem? Or am I simply doing something wrong all of a sudden, e.g. using something deprecated that was patched today?
If anyone could point me in the right direction, that'd be awesome. Thank you in advance.
Right. So 11 minutes after posting this issue, the service started working again. 🤦🏻‍♂️
It had been down since (at least) 2019-09-18 08:21:01 UTC, which translates to about 7 hours.
For anyone reading this, the code above should be fine and working as intended.
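For completeness, this is how the result of that call is typically consumed with the Node.js client (a sketch; it assumes a request object holding the config/audio from the question and an enclosing async function):

// await the long-running operation, then join the transcribed pieces
const [operation] = await client.longRunningRecognize(request);
const [response] = await operation.promise();
const transcript = response.results
  .map(result => result.alternatives[0].transcript)
  .join('\n');
console.log(transcript);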

Need help with an audio conference using the Kurento composite media element in Node.js

I am referring to the code from GitHub for an audio and video conference using the Kurento composite media element; it works fine for audio and video streaming over WebRTC.
But I need an audio-only conference using WebRTC, so I made changes to the above GitHub code; the new code is uploaded to a GitHub repository.
I have added the below changes in the static/js/index.js file:
var constraints = {
  audio: true, video: false
};
var options = {
  localVideo: undefined,
  remoteVideo: video,
  onicecandidate: onIceCandidate,
  mediaConstraints: constraints
}
webRtcPeer = kurentoUtils.WebRtcPeer.WebRtcPeerSendrecv(options, function(error) {
When I run this code there are no errors from the Node server or in the Chrome console, but the audio stream does not start; it only shows a spinner for a long time. The Chrome console log is here.
As per the reply to my previous Stack Overflow question, we need to specify MediaType.AUDIO in the Java code like below:
webrtc.connect(hubport, MediaType.AUDIO);
hubport.connect(webrtc, MediaType.AUDIO);
But I want to implement it in Node.js using kurento-client.js, and I did not find any reference for setting MediaType.AUDIO when connecting the hubPort and webRtcEndpoint in the Node.js API.
Can someone please help me make the code changes for this in Node.js, or suggest a reference, so I can implement an audio-only conference using the composite media element and Node.js?
This should do it:
function connectOnlyAudio(source, sink, callback) {
  source.connect(sink, "AUDIO", function(error) {
    if (error) {
      return callback(error);
    }
    return callback(null);
  });
}
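Presumably this is then used on both legs of the connection, mirroring the Java snippet from the question (a sketch; webRtcEndpoint, hubPort, and onError are assumed to exist in your session setup code):

// connect endpoint -> hub and hub -> endpoint, audio only, as in the Java example
connectOnlyAudio(webRtcEndpoint, hubPort, function(error) {
  if (error) return onError(error);
  connectOnlyAudio(hubPort, webRtcEndpoint, function(error) {
    if (error) return onError(error);
    // the audio-only leg of the conference is now wired up
  });
});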
We are in the process of improving the documentation of the project. I hope that this will all be made more clear in the new docs.
EDIT 1
It is important to make sure that you are indeed sending something, and that the connection between your client and the media server is negotiated correctly. Going through your bower.json, I've found that you are setting the adapter dependency to a wildcard, so to speak. In the latest releases, they've done some refactoring that makes the kurento-utils-js library fail. We haven't yet adapted to the new changes, so you need to pin the adapter.js dependency like so:
"adapter.js": "v0.2.9"

Is it possible to get the currently playing track in the 1.x apps API?

I am trying to update my Spotify remote control app that is currently using the legacy API to use the new 1.x API. Is it possible using the 1.x API to access information about the currently playing track? models.player.track does not seem to exist anymore (though it's in the documentation).
For the curious: I am using this for my app running in Spotify Desktop, which uses websockets to talk with a Python server, which in turn provides a web interface for phones and tablets to remotely control the instance of Spotify running on the desktop. This works great using the legacy API: I can control playback and get the now-playing info from any connected remote. I assume this app is going to stop working at some point soon, since Spotify says they are retiring the legacy API. (Unless my assumption that the app will stop working is wrong; then never mind.)
Thanks.
It is possible to access the currently playing track by loading the track property of the Player.
You would do something like this:
require(['$api/models'], function(models) {
  function printStatus(track) {
    if (track === null) {
      console.log('No track currently playing');
    } else {
      console.log('Now playing: ' + track.name);
    }
  }

  // update on load
  models.player.load('track').done(function(p) {
    printStatus(p.track);
  });

  // update on change
  models.player.addEventListener('change', function(p) {
    printStatus(p.data.track);
  });
});
You have a working example in the Tutorial App named "Get the currently playing track".
