Variable value randomly changes in node.js

I have built a node.js app with this structure:
In app.js:
var myList = ['0'];

app.get('/webpage', function (req, res) {
  console.log(myList);
  res.render('webpage.ejs', { exps: myList });
});
On "webpage" I can display myList and there is also a form that allows me to add elements to myList. Let's say I append '1' to myList through this form.
I have the following problem, which I don't know how to debug:
Locally on my computer, the app works fine: I see ['0','1'] in my console each time I refresh "webpage".
Online on Heroku, when I refresh "webpage" I sometimes see ['0','1'] and sometimes just ['0'], and a couple of refreshes later I see ['0','1'] again: it is as if myList randomly oscillates between its default value from when the app was first launched and the value that was added later.
I use the same npm and Node versions locally and on Heroku, and the same versions of all dependencies. To my knowledge the two environments are identical, so I have no idea where this problem could be coming from.

You may be running multiple instances of your app on Heroku, in which case each request might be routed to a different instance, each with its own process and memory space.
I believe Heroku also shuts down instances after a period of inactivity, so that might be an issue too.
If you intend to persist something, how about storing it in a database?
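For example, here is a minimal sketch of keeping the list in MongoDB instead of in process memory. The database, collection and field names are made up, and the form-handling route is an assumption since it isn't shown in the question:

const { MongoClient } = require('mongodb');

// Illustrative names only; connection setup (await client.connect()) is omitted for brevity.
const client = new MongoClient(process.env.MONGODB_URL);
const items = client.db('myapp').collection('items');

app.get('/webpage', async function (req, res) {
  // Read the list from the database on every request instead of from a local array.
  const docs = await items.find().toArray();
  res.render('webpage.ejs', { exps: docs.map(function (d) { return d.value; }) });
});

app.post('/webpage', async function (req, res) {
  // Assumed form handler; body-parsing middleware (express.urlencoded) is also assumed.
  await items.insertOne({ value: req.body.newItem });
  res.redirect('/webpage');
});

Because every dyno reads and writes the same database, the list no longer depends on which instance happens to serve the request.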

Related

How do I save data across a node.js server?

I'm using Heroku to host a Node.js server that keeps a variable storing the total number of times users have clicked something on the site. When something is clicked, the variable is increased by 1. However, Heroku puts the site to sleep after 15 minutes of inactivity, and everything is reset. I tried using Node.js to write the value to a file, but it seems files are also reset. Does anyone know a way to keep the data saved even after Heroku declares the app inactive?
There is no way around it: the dyno filesystem is ephemeral, so files written at runtime are discarded when the dyno sleeps or restarts. You need some external storage, such as a MongoDB instance set up somewhere else.
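A rough sketch of what that could look like with the official MongoDB driver (the database, collection and field names are illustrative, and connection setup is simplified):

const { MongoClient } = require('mongodb');

// Connection setup (await client.connect()) is omitted for brevity.
const client = new MongoClient(process.env.MONGODB_URL);
const counters = client.db('myapp').collection('counters');

async function recordClick() {
  // $inc is atomic, so clicks counted from several dynos don't overwrite each other.
  await counters.updateOne({ _id: 'clicks' }, { $inc: { total: 1 } }, { upsert: true });
}

async function getClickCount() {
  const doc = await counters.findOne({ _id: 'clicks' });
  return doc ? doc.total : 0;
}

The counter then lives in the database rather than in the dyno's memory or filesystem, so sleeping or restarting the dyno no longer resets it.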

how to configure graphql url in prefect server 0.13.5

After upgrading from 0.12.2 to 0.13.5, a connectivity issue came up with the graphql component. Prefect Server is running on a different machine, but the graphql url remains http://localhost:4200/graphql. server.ui.graphql_url worked fine with version 0.12.2, but now I can't find any way to configure the graphql url properly.
Below you will find the config.toml:
$ cat ~/.prefect/config.toml
[logging]
level = "INFO"
[api]
url = "http://192.168.40.180:4200"
[server.database]
host_port = "6543"
[context.secrets]
SLACK_WEBHOOK_URL = 'https://hooks.slack.com/services/xx/XX/Xx'
[server.ui]
graphql_url = "http://192.168.40.180:4200/graphql"
In the image you can see a POC of the case.
I'm a little bit confused about the old and the new way to configure Prefect Server. Do you have any idea about this issue?
EDIT: The ticket I mentioned below has been closed; when 0.13.9 is released, it will contain a new runtime config, apollo_url (a more accurate name, since that's the container we're actually looking for). It is inserted into a static settings file in the UI build, which is fetched when the application starts. This should address all the points mentioned below.
This is a change from Prefect Server ^0.13.0, which removed graphql_url as a configurable environment variable.
The previous version of Server used a find-and-replace on the UI code, which is compiled and minified at image build time. The reason for this was to move the burden of installing the required Node modules and building the application away from client-side installations and onto Prefect at release time, since those steps can take a long time (10+ minutes each) in containerized environments. The downside is that the variables are injected at build time, so modifying them requires a less-than-desirable lookup of the previously injected values, and ultimately pulling a new image.
We chose to ship the new version with an in-app input, which allows changing the Server endpoint at browser run-time instead. This gives the flexibility of a single UI instance to connect to any accessible Server installation, leveraging local storage to persist this setting between browser sessions.
That said, we have a ticket open to re-expose the default configuration in a better way than in the previous version. You can follow that ticket here.
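For reference, once 0.13.9 is out, the new setting should be configurable in config.toml along these lines (the exact section is an assumption, based on where graphql_url used to live):

[server.ui]
apollo_url = "http://192.168.40.180:4200/graphql"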

Test environment alongside the production environment for Node.js applications

Before asking, I searched a lot and there are many articles about it, but my question is a little deeper.
I have an application using Node.js/Express.js + MongoDB + Reddit + PM2 clustering mode + a Bitcoin and card payment gateway + an API system.
My problem is that I'm developing this application in production ("real mode"), and it's really awful. Sometimes I release small code updates, run "pm2 log", and it shows me a syntax error or something else; I try to fix it and release again. During this time the application, with many users, is down.
I should also say that something like a Bitcoin payment needs real tests: it needs requests and responses from the blockchain. How can I have a test environment where I can test everything exactly as in production, and only deploy to production once everything is fine?
An environment that is easy to code and test in, and then easy to deploy from? Is Mocha exactly what I need? I'm using PM2 clustering mode.
Your question is not a proper question, but rather a layer of questions, some opinion-based, some too broad to answer. But let's try breaking it down.
The stated problem is "when I'm developing this application in real mode ... I release updates ... it shows me syntax error ... application is down". I will read it as: the main problem is that you're developing in the production environment. Let's forget for a while how lousy a practice that is, and focus on something constructive.
Let's define rough steps to take.
Live environment
The most pressing problem seems to be that you work on live app, where crashing it during dev means crashing it for your users as well. Let's deal with that.
Immediately change all your access codes, keys, usernames and passwords, and store them in an environment file (which is safe to encrypt and back up, but is not committed to source control), say, environment-prod.env.
Then create a second set of credentials for all the services that you use. For MongoDB, for example, it's easy: just create a local database instance called, say, test_database. For Reddit, create a second app, call it my-app-test, for example. Some services might have an option to create a set of test credentials right there in the app; with others you'll simply have one app for test and one for production.
Create a new environment file, e.g. test-environment.env, with all the same keys (e.g. REDDIT_APPID, REDDIT_SECRET, MONGODB_URL, BLOCKCHAIN_GATEWAY_KEY etc), but new values.
Now, for one, you have a test environment. Make an alias, e.g. alias dev="cd $HOME/projects/my-reddit-bitcoin-app && source test-environment.env". Every day you come to work on the app, type dev, then you can start pm2 etc and work safely in dev environment. Your users will never see your crashes.
Only when you're sure a new feature or bugfix is complete, switch environments (source environment-prod.env), deploy the new app to the server where it runs, and pm2 restart or whatever you use for these deployments. Switch back to the test env immediately before working on the code again.
Read up more on how to separate test/prod environments. Read a bit on git workflows (e.g. branch off of the latest master to a feature or bugfix branch; when it's tested, merge it back in, tag it "release-", and deploy to production. Then go automate all of that if possible.)
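As a rough illustration of how the app side consumes those files (the variable names mirror the examples above; which file you source before starting pm2 decides which values end up in process.env):

// config.js - sketch: read all credentials from the environment, never hard-code them.
// The variable names below are the illustrative ones used above.
module.exports = {
  redditAppId: process.env.REDDIT_APPID,
  redditSecret: process.env.REDDIT_SECRET,
  mongodbUrl: process.env.MONGODB_URL,
  blockchainGatewayKey: process.env.BLOCKCHAIN_GATEWAY_KEY,
};

The application code itself never changes between environments; only the sourced file does.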
Testing
Mocha is perfectly suitable for running tests for a Node/Express app. It's the tests that matter.
You say a Bitcoin payment needs a real request and response. Let's see how to do that.
Add nock (https://www.npmjs.com/package/nock) to your app: npm i -D nock.
Import it at the top of your test file, e.g. at the top of some-test.spec.js:
const nock = require('nock')
Start recording requests, e.g. add this in the before() block of your test:
describe('My tests', function () {
  before(function () {
    nock.recorder.rec();
  });

  // ... tests
});
Now, run one test at a time (e.g. write one test that does one specific task from your app) and check what's in the console. E.g. if you make a request (request.post('http://reddit.com/api/submit', jsonData)), you'll see nock printing the exact response (in JSON format) to the console as the test runs. Copy that into the test file, e.g. put it at the bottom as:
var testResponse = <whatever was in the console in json format. Or string, whatever>. // homework is to find out why var and not const, if this is at the end of the file.
Now stop the recorder (comment it out), and in your actual test, run this instead:
const pipe = nock('http://www.example.com')
  .get('/resource')
  .reply(200, testResponse);
Do that for all your requests.
Now what you have is a test setup so that when you change the code, it should not run against the real Reddit api, or real payment gateway api, but get your mocked responses instead. Pair it up with some good assertions and you should be fine. Make sure you mock everything. If you add new types of requests, make sure to record them, and add them to your procedure.
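Putting the pieces together, a self-contained spec might look roughly like this; the endpoint, payload and response below are made up for illustration, and axios stands in for whatever HTTP client your app actually uses:

// some-test.spec.js - sketch of a Mocha test that mocks an external API with nock.
const nock = require('nock');
const assert = require('assert');
const axios = require('axios'); // assumption: substitute your real HTTP client

const testResponse = { success: true, id: 'abc123' }; // pasted from the recorder output

describe('submitting a post', function () {
  before(function () {
    // Intercept the outgoing request instead of hitting the real API.
    nock('https://reddit.com')
      .post('/api/submit')
      .reply(200, testResponse);
  });

  after(function () {
    nock.cleanAll();
  });

  it('returns the mocked response', async function () {
    const res = await axios.post('https://reddit.com/api/submit', { title: 'hello' });
    assert.strictEqual(res.data.id, 'abc123');
  });
});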
Now, all this is very vague. Broad. Just one way to do it. Lengthy process. Probably not the best one. Not tailored to your specific conditions. But it should get you started. Take those things, step by step, and if you get stuck, come back to Stackoverflow. But do start working on it, because your current method seems to be unsustainable in the long run.

How can my Flask gunicorn app remember its state?

I am running a Python 3/Flask app on gunicorn on Heroku. The app presents the user with a list of items, pulled in from an API call, for the user to accept or reject. Depending on whether the user hits the accept or reject link associated with each item, the app appends the item to either an internal list of accepted items or one of rejected items.
Currently I'm storing each list (suggestions, acceptances and rejections) as a pandas dataframe object within the app.
i.e. I initialise my app with empty data frames:
from flask import Flask
import pandas as pd

app = Flask(__name__)
app.accepted = pd.DataFrame()
app.suggested = pd.DataFrame()
populate the suggestions data frame with an API call:
#app.route("/get_suggestions")
def get_suggestions():
app.suggested = <some data returned from an API>
then append a suggested item to the accepted dataframe once an 'accept' link is hit:
#app.route("/accept/<suggest_id>")
def accept_item(suggest_id):
app.accepted(len(app.accepted)) = app.suggested.loc[int(suggest_id)]
This all works fine running on gunicorn in my local miniconda virtual environment (running "heroku local web"), but when deployed on Heroku I keep getting "Internal Server Error". When I look at the logs, it looks like the app's internal variables (e.g. app.suggested) are not being preserved, so that when accept_item is run, app.suggested is always empty. Why would they be preserved in the local version but not in the Heroku deployment?
What's the simplest way of preserving this state? I'd like a small number of users to be able to use the app and each build their own temporary lists. Do I need to use SQLite to preserve state? Do I need to drop a cookie into the user's browser so I can tell different users apart? I'd prefer not to require users to create an account on my site.
You can do one of two things:
Either save this state into cookies (which I would not recommend), or
Store this data into a database like Heroku Postgres (for instance), and use SQLAlchemy or some other ORM to retrieve and store that data.
Heroku dynos (which run your web application) are stateless: they reboot at will, their filesystem is not persisted, etc.
Storing global state in the app won't work reliably, because each gunicorn worker (and each dyno) has its own copy of those variables, and they are reset whenever a worker or dyno restarts.
You could use the app context to store this state, but since you're running on Heroku this is asking for trouble, as the dynos will restart at random.

Two concurrent requests getting mixed up in Node.js app

I am completely stumped by what I noticed today in my app.
I have an app written in Node.js running behind nginx with MongoDB as the backend. I have an 'authenticateUser' call which takes in a username and password. It then queries MongoDB to retrieve the user's document and checks whether the password matches.
We wrote a script which basically calls 'authenticateUser' in a loop 100 times. It worked fine, no errors. Then we ran the same script from two terminals, one for user bill and the other for user sam. We started seeing failures in both terminals: I would say about 10% of the requests failed with invalid password errors.
When we inspected the log files, we were completely surprised to see bill's username getting mixed up with sam's password. We have no idea what's going on. We must be doing something obviously wrong. What is it? Aren't the two requests completely isolated from each other?
Do you use global variables often? A missing var is a common cause of such errors.
And yeah, I'd like to see the source code...
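For illustration, this is the kind of bug a missing var (or let/const) creates in a concurrent handler; authenticateUser below is a made-up stand-in, not the code from the question, and db is assumed to be an already-connected MongoDB database handle:

function authenticateUser(username, password, callback) {
  // BUG: no var/let/const, so currentUser becomes an implicit global that is
  // shared by every in-flight request in the process.
  currentUser = username;

  // db is assumed to be a connected MongoDB database handle.
  db.collection('users').findOne({ name: currentUser }, function (err, doc) {
    // By the time this callback runs, a concurrent request may have reassigned
    // currentUser, so sam's password can end up checked against bill's document.
    if (err || !doc) return callback(err || new Error('user not found'));
    callback(null, doc.password === password);
  });
}

// The fix is simply to declare the variable so each call gets its own binding:
//   var currentUser = username;   // or let/const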
