I am currently working with the Nest thermostat and I would like to get/set data locally, because local access is faster and more secure, and because you don't have to deal with the Nest server (the Nest server is working fine, by the way - no complaints about that).
But I am not able to connect to it locally.
If I ping the Nest, I get a normal response. It may be a little slow, but it is a response.
That's it, though. Beyond ping, the thermostat refuses every other connection attempt.
Does anyone know a solution?
Unfortunately this isn't possible and is intended to be impossible for the Nest.
The only thing I can think of is setting up a local Firebase server, isolating both that server and the Nest device behind a router, and then simulating the proper Nest calls as well as the responses. This is purely theoretical - I don't think it is possible, and if it were, it would be very difficult.
I'm currently running two servers, both on my local machine. One is Spring, the other is Flask. The Spring server handles the major business logic of my application, whereas the Flask one handles light database access operations. Additionally, I have a React Node.js server that interacts with both of these. I am considering merging the Flask server into my Spring server to solve the problem described below, but I am curious to know the correct way of connecting to two servers.
I am using a React proxy pointed at the Spring server's port and making my calls through that, whereas I connect to my Flask server via fetch at 'http://127.0.0.1:5000/'. This has been working great when accessing the site from the hosting PC, but when trying to access it remotely via port forwarding, the Flask server cannot be reached. This makes sense, because the fetch resolves localhost on the machine the browser is running on.
My question is: other than merging the two servers, how can I connect to both of them remotely like this? Is there something big I am missing? I am of course happy to provide all files or information necessary, and as this is my first time doing a project like this, I hope what I've explained is sufficient.
EDIT: Just as an aside, I think some of this is due to Flask's default behavior of not allowing outside connections. I will update this more if anything comes of my tests.
EDIT2: I tested this further and found that Flask server access works great when the React proxy points to it, while the Spring server then fails - the opposite of the behavior I had before. So it isn't about Flask's default behavior, but the fact that the React proxy can only point to one server at a time.
I have got this working, and a good answer was posted on SO already, just under different phrasing, so I didn't find it. I will link the post here!
Multi server solution
Specifically, I just wasn't familiar with the setupProxy.js file, and this is the solution to this problem.
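For anyone landing here later, a setupProxy.js for two backends looks roughly like this - a minimal sketch assuming http-proxy-middleware v1+, with the Spring server on 8080 and Flask on 5000 (the /api and /db prefixes are just placeholders for your own routes):

```js
// src/setupProxy.js - picked up automatically by Create React App
const { createProxyMiddleware } = require('http-proxy-middleware');

module.exports = function (app) {
  // requests starting with /api go to the Spring server
  app.use('/api', createProxyMiddleware({
    target: 'http://localhost:8080',
    changeOrigin: true,
  }));

  // requests starting with /db go to the Flask server
  app.use('/db', createProxyMiddleware({
    target: 'http://127.0.0.1:5000',
    changeOrigin: true,
  }));
};
```

Because every request now goes through the React dev server's origin, remote access via port forwarding behaves the same as access from the hosting PC.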
I have two somewhat theoretical questions. I searched the Internet but didn't get a clear answer.
My first question:
I would like to develop an app on the MEAN stack (Mongo, Angular 8, NodeJS server), but I don't want a central server connected to the database somewhere with all the clients connecting to it remotely. I want to deploy the whole app (Mongo database, server back-end, Angular front-end) locally on a standalone PC. Yes, the user would have to start both the database and the server services, and yes, they would have to use the app through the browser at the localhost address, but I don't want them to be able to see the code. Is this possible, or do you have any trick in mind to achieve it?
My second question:
Can I connect the Mongo database directly to the Angular 8 code without going through the NodeJS APIs?
I know my questions are a bit generic, but I am not looking for huge answers - just whether these things are possible and some tips on how to move on.
For the first question: your user will always be able to see the compiled code (through the developer console, for example), but not the original source code of the Angular application.
I have been looking at various solutions, but when I put it all together, it looks very confusing.
I am trying to implement pm2 cluster mode for my application, which has a socket.io implementation. Now, I understand the concept that statelessness is required in order to make my app work properly in cluster mode, and socket.io is NOT stateless. The confusion is:
1) Our friend Cam says that just implementing socket.io-redis will work fine when we spawn across the maximum number of CPUs.
2) While the socket.io docs say, and I quote:
"Note: sticky-session is still needed when using the Redis adapter."
For 1): according to my research, the Internet seems to disagree that this would work. Maybe Mr. Cam got lucky by having websocket as the transport method and never had to deal with polling. But at the same time I think it should work, since the redis-adapter is exactly what we use to make it stateless (see the sketch after this list).
INFO: It worked for me with websocket as the transport method, but I couldn't test it with polling.
For 2): I think we can combine this with Mr. Joni's advice to run it with pm2 in cluster mode but on different ports, and then our beloved nginx's upstream group with ip_hash would give us roughly the same effect as sticky sessions.
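For reference, here is a minimal sketch of what I mean by the Redis adapter setup (this assumes the socket.io 2.x API with the socket.io-redis package and a local Redis on 6379; the port and event names are just illustrative):

```js
// server.js - one instance per pm2 process, each on its own port
const http = require('http');
const socketio = require('socket.io');
const redisAdapter = require('socket.io-redis');

const server = http.createServer();
const io = socketio(server, {
  // forcing websocket sidesteps the polling handshake that needs sticky sessions
  transports: ['websocket'],
});

// every instance publishes/subscribes through Redis, so broadcasts reach all nodes
io.adapter(redisAdapter({ host: 'localhost', port: 6379 }));

io.on('connection', (socket) => {
  socket.on('chat message', (msg) => {
    io.emit('chat message', msg); // delivered to sockets on every instance via Redis
  });
});

server.listen(process.env.PORT || 3000);
```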
Additionally, I want to make my application elastic - not just at the cluster level, but able to scale both up and out. What are the best practices, given that my application includes a socket.io implementation and session token management in Redis?
Am I missing something, or am I totally wrong here? What would be the best way to scale?
I've got the solution, and it is working perfectly fine for me! Thanks to #elad and contributors. I've done an extensive amount of testing (more than 2 months!) and never had a problem. I won't disrespect the author by re-explaining what the snippet does, as it has already been described well enough, line by line.
It took me this long because there were open issues on the repo and I had to be sure. Now I am sure that those issues are workable once you understand the different components. After all, this is not magic!
Have a look and let me know if you still have doubts/questions.
I'm looking for something to make my two running apps communicate with each other.
I have two apps - main.js and events.js - and I want to send events from events.js to main.js. I don't even know what to google for, because everything I can find seems a little outdated, and I'm looking for something different than Redis. I found out I can use uuid to communicate between different node.js processes, but I don't know how. Any information would be great!
Your best bet is to use a message queue system similar to Kue.
This way you will be able to make your apps communicate with each other even if they are on different servers.
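Kue is backed by Redis under the hood, so keep that in mind, but the basic shape is roughly this (a minimal sketch; the 'user-event' job name and payload are just illustrative):

```js
const kue = require('kue');
const queue = kue.createQueue(); // connects to Redis on localhost:6379 by default

// events.js - producer side
queue.create('user-event', { type: 'signup', userId: 42 })
  .save((err) => {
    if (err) console.error('could not enqueue job', err);
  });

// main.js - consumer side
queue.process('user-event', (job, done) => {
  console.log('received event', job.data);
  done();
});
```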
If you want to work without the Redis backend, you can skip the filesystem entirely and move to socket/TCP communication, which is one way of getting around semaphores. This is a common method of low-latency communication used by applications on the same machine or across the network. I have seen and used this method in games as well as desktop/server applications.
Depending on how far down the rabbit hole you want to go, there are a number of useful guides about this. The Node.js TCP API (the net module) is actually pretty great.
Ex.
https://www.hacksparrow.com/tcp-socket-programming-in-node-js.html
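To give a feel for it, a bare-bones version with Node's built-in net module might look like this (a sketch only; the port and message format are arbitrary, and a real setup would need proper message framing):

```js
// main.js - listens for events from other local processes
const net = require('net');

const server = net.createServer((socket) => {
  socket.on('data', (chunk) => {
    // note: assumes one JSON message per chunk; real code would add framing
    const event = JSON.parse(chunk.toString());
    console.log('main.js received:', event);
  });
});

server.listen(7070, '127.0.0.1');
```

```js
// events.js - connects and sends an event
const net = require('net');

const client = net.connect(7070, '127.0.0.1', () => {
  client.write(JSON.stringify({ name: 'user-signed-up', id: 42 }));
  client.end();
});
```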
I am developing a chat application using Node.js/socket.io on the server side.
Now it is time to test how scalable it is.
I think I can simulate a large number of socket.io clients effectively using Node.js as well, but running the client code this time.
The question is: how can I run the socket.io client library on Node.js? Is this possible?
If so, can anyone please provide a simple example?
My code is running fine in the browser with the usual development load. The issue is not whether the code runs or not; I am not actually planning to run the same client code, just opening a large number of connections and sending thousands of messages to get a preliminary figure for scalability and resource consumption.
Also, any suggestions on testing socket.io server scalability would be appreciated.
Thanks a lot.
What you're looking for isn't going to really be helpful. Even if you could simulate client-side socket.io in a node process, it wouldn't have the same dynamic properties as actual access from browsers. You'd be able to determine things like the maximum number of connections you could handle without running out of resources, but your general performance metrics would be pretty artificial and not generalizable.
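That said, the socket.io client library itself does run under Node, so for the narrower goal of just opening lots of connections and watching resource consumption, a rough sketch could look like this (assuming socket.io-client is installed and the server listens on localhost:3000; the connection count and event name are illustrative):

```js
// load-test.js - open many socket.io connections from a single Node process
const io = require('socket.io-client');

const NUM_CLIENTS = 1000; // illustrative; raise gradually and watch memory/CPU
const sockets = [];

for (let i = 0; i < NUM_CLIENTS; i++) {
  const socket = io('http://localhost:3000', {
    transports: ['websocket'], // skip long-polling to keep the test lighter
    forceNew: true,            // give each client its own underlying connection
  });

  socket.on('connect', () => {
    socket.emit('chat message', `hello from client ${i}`);
  });

  socket.on('connect_error', (err) => {
    console.error(`client ${i} failed to connect:`, err.message);
  });

  sockets.push(socket);
}
```

As noted above, this mostly tells you about connection limits and memory per socket rather than real-world browser behaviour.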