Sharing global dictionary between requests - python-3.x

I am currently developing an online real-time game which uses a global dictionary to store all the game rooms. If a user tries to enter a game, the script checks the dictionary for an empty room. If no empty room is found, a new room object is added to the dictionary so that other logged-in users can enter a game room.
The problem is that using a global dictionary for such a task is not a good idea, as pointed out in these questions: Are global variables thread safe in flask? How do I share data between requests?, and Preserving global state in a flask application.
In the answers, it was recommended to store data shared between requests in a database or in memcached. If I wanted to do it the database way, should I store the entire dict in the database every time it is requested? Is there a better and more secure way to do this?

should I store the entire dict in the database every time it is requested
If you use a database (like SQLite), the entire dict should already be in the database, and you can query it whenever you need information about the game rooms. Do not keep the dict with shared data in memory at all: move all the shared data into the database, remove the in-memory dict, query the database whenever you need shared data, and update the database whenever shared data changes.
I suggest you try it out. I think you will find the database fast (and secure) enough.
Note that a database also has ACID properties which you can use and rely on. The value of these ACID properties might not be clear at the moment, but that can change the more you use a database.
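For illustration, here is a minimal sketch of what the room lookup could look like against SQLite. The rooms table, the two-player limit, and the function names are assumptions for the example, not something from the question:

    import sqlite3

    DB_PATH = "game.db"  # assumed path

    def get_conn():
        # One short-lived connection per request; SQLite handles locking.
        conn = sqlite3.connect(DB_PATH)
        conn.execute(
            """CREATE TABLE IF NOT EXISTS rooms (
                   id INTEGER PRIMARY KEY,
                   player_count INTEGER NOT NULL DEFAULT 0
               )"""
        )
        return conn

    def join_room():
        """Find a room with a free slot, or create a new one, atomically."""
        conn = get_conn()
        try:
            with conn:  # wraps the lookup and update in one transaction
                row = conn.execute(
                    "SELECT id FROM rooms WHERE player_count < 2 LIMIT 1"
                ).fetchone()
                if row is None:
                    cur = conn.execute("INSERT INTO rooms (player_count) VALUES (1)")
                    return cur.lastrowid
                room_id = row[0]
                conn.execute(
                    "UPDATE rooms SET player_count = player_count + 1 WHERE id = ?",
                    (room_id,),
                )
                return room_id
        finally:
            conn.close()

Because each request opens its own connection and the lookup plus update run inside a single transaction, there is no module-level state left to share between workers.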

Related

Can you use a Node server as storage for a minimal JSON object?

I have a JSON object that is about 350 KB in size: a list of about 1,500 items. I don't want to keep it on the front end, but I don't want to use a database either. Can I store it in Node.js and fetch it from there each time the data is needed? I have no idea whether this would be considered bad practice; what do you think?

Best way to run a script for large userbase?

I have users stored in a PostgreSQL database (~10M) and I want to send all of them emails.
Currently I have written a Node.js script which fetches users 1,000 at a time (OFFSET and LIMIT in SQL) and queues the requests in RabbitMQ. This seems clumsy to me, because if the Node process fails at any point I have to restart it (I am currently keeping track of the number of users skipped per query and can restart at the previous offset found in the logs). This might lead to some users receiving duplicate emails and some not receiving any. I could create a new table with a column indicating whether the email has been sent to that person, but in my current situation I can't do so: I can neither create a new table nor add a new column to an existing table. (Seems to me like an idempotency problem?)
How would you approach this problem? Do you think compound indexes might help? Please explain.
The best way to handle this is indeed to store who received an email, so there's no chance of doing it twice.
If you can't add tables or columns to your existing database, just create a new database for this purpose. If you want to be able to recover from crashes, you need to store who got the email somewhere; if you are given hard restrictions on not storing this in your main database, get creative with another storage mechanism.
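For example, if a small side SQLite file is allowed, the bookkeeping could look roughly like this (a sketch only; the sent table, the send_email callable, and the function names are made up for illustration):

    import sqlite3

    def get_tracker(path="sent_emails.db"):
        # Separate file, so the main database stays untouched.
        conn = sqlite3.connect(path)
        conn.execute("CREATE TABLE IF NOT EXISTS sent (user_id INTEGER PRIMARY KEY)")
        return conn

    def send_batch(users, send_email, tracker):
        """Send to each user at most once; safe to re-run after a crash."""
        for user_id, email in users:
            already = tracker.execute(
                "SELECT 1 FROM sent WHERE user_id = ?", (user_id,)
            ).fetchone()
            if already:
                continue  # this user was handled in an earlier run
            send_email(email)  # e.g. publish the message to RabbitMQ
            with tracker:  # commit the marker only after queuing succeeds
                tracker.execute("INSERT INTO sent (user_id) VALUES (?)", (user_id,))

Re-running the script after a crash then simply skips everyone who was already marked, which removes the duplicate/missed-email problem regardless of where the main query left off.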

Restricting access to Excel source data

I have an Excel template which reads data from a source Excel file using VLOOKUP and INDEX/MATCH functions. Is there a way to prevent the end user from accessing the source data file/sheet, e.g. by storing the source file in a remote location and making the lookups read from there?
Depending on what resources are available to you, it may be difficult to prevent users from simply going around the restrictions you put in place. Even if the data is in a database table, you will need measures in place to prevent users from querying it outside of your Excel template. I don't know your situation, but ideally there would be someone (e.g. a database administrator, infosec, or a back-end developer) who could help engineer a proper solution.
Having said that, I do believe your idea of using MS SQL Server could be a good way to go. You could create stored procedures instead of plain SQL queries to limit access. See this link for more details:
Managing Permissions with Stored Procedures in SQL Server
In addition, I would be worried about users figuring out other users' IDs and arbitrarily accessing their data. You could implement some protection by adding a mapping table, so that there is no way to access information with a user ID directly. The table would be as follows:
Columns: randomKey, userId, creationDateTime
randomKey is just an x-digit random number/letter sequence
creationDateTime is a timestamp used for timeout purposes
Whenever someone needs a user ID, you run a stored procedure that adds a record to the mapping table: you input the user ID, the procedure creates a record and returns the key. You provide the user with the key, which they enter in your template. A separate stored procedure takes the key, resolves it to the user ID (using the mapping table), and returns the requested information. These keys expire: either they are single use (the procedure deletes the record from the mapping table) or they use a timeout (if creationDateTime is more than x hours/days old, the procedure returns no data).
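The real implementation would be T-SQL stored procedures on SQL Server; purely to illustrate the flow, here is a rough Python/SQLite stand-in (key_map, create_key, resolve_key and the 24-hour timeout are all made-up names and values):

    import secrets
    import sqlite3
    from datetime import datetime, timedelta

    KEY_TTL = timedelta(hours=24)  # assumed timeout

    conn = sqlite3.connect("mapping.db")
    conn.execute(
        """CREATE TABLE IF NOT EXISTS key_map (
               random_key TEXT PRIMARY KEY,
               user_id INTEGER NOT NULL,
               creation_datetime TEXT NOT NULL
           )"""
    )

    def create_key(user_id):
        """Issue a random key for a user ID (the 'add a record' procedure)."""
        key = secrets.token_urlsafe(8)
        with conn:
            conn.execute(
                "INSERT INTO key_map VALUES (?, ?, ?)",
                (key, user_id, datetime.utcnow().isoformat()),
            )
        return key

    def resolve_key(key):
        """Resolve a key back to a user ID, honouring the timeout."""
        row = conn.execute(
            "SELECT user_id, creation_datetime FROM key_map WHERE random_key = ?",
            (key,),
        ).fetchone()
        if row is None:
            return None
        user_id, created = row
        if datetime.utcnow() - datetime.fromisoformat(created) > KEY_TTL:
            return None  # expired key
        return user_id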
For the keys, Mark Ransom shared an interesting solution for creating random IDs on which you could base your logic:
Generate 6 Digit unique number
Sounds like a lot of work, but if there is sensitivity around your data it's worth building a more robust process around it. There's probably a better way to approach this, but I hope it at least gives you food for thought.
No, it's not possible.
Moreover, you absolutely NEED these files open to refresh the values in formulas that refer to them. When you open a file with external references, their values are calculated from the local cache (which may not match the actual contents of the remote file). Only when you open the remote files will the values refresh.

Robot's Tracker Threads and Display

Application: the proposed application has a TCP server able to handle several connections from the robots.
I chose to work with a database rather than files, so I'm using an SQLite DB to save information about the robots and their full history, robot models, tasks, etc.
The robots send us various data such as odometry, task information, and so on.
I create a thread for every new robot connection to handle the messages and update the robots' information in the database. Now let's talk about my problems:
The application has to show information about the robots in real time, and I was thinking about using QSqlQueryModel, setting the right query and showing it on a QTableView, but then I ran into some problems/solutions to think about:
Problem number 1: there is information to show on the QTableView that is not in the database. I have the current consumption and the actual charge (capacity) in the database, but I also want to show the remaining battery time in my table. How can I add that column, with the right behaviour (the math implemented), to my QTableView?
Problem number 2: I will be receiving messages every second for each robot, so updating the DB and then the GUI (re-running the query) may not be the best solution when I have a large number of robots connected. Is it better to update the table directly and only update the DB each minute or so? If I use this method I can't use QSqlQueryModel to update the tables, so what approach do you recommend?
Thanks
SancheZ
I have run into a similar problem before; my conclusion was that QSqlQueryModel is not the best option for display purposes. You may want to do some processing on the query results, or you may want to create, remove, or change display data based on the results for a fancier GUI. I think it is best to implement your own model and delegates and override the view-related methods (setData, setEditor).
This way you have control over all your columns and a direct union of the raw data and its display equivalent (i.e. EditData, UserData).
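The answer is about Qt/C++, but the same model API exists in PySide6, so here is a minimal sketch of a custom model with a derived column. The (name, charge, consumption) row layout and the charge/consumption formula for remaining time are assumptions, not something from the question:

    from PySide6.QtCore import Qt, QAbstractTableModel, QModelIndex

    class RobotTableModel(QAbstractTableModel):
        HEADERS = ["Robot", "Charge (Ah)", "Consumption (A)", "Remaining (h)"]

        def __init__(self, rows=None, parent=None):
            super().__init__(parent)
            # each row: (name, charge_ah, consumption_a) pulled from the database
            self._rows = rows or []

        def rowCount(self, parent=QModelIndex()):
            return len(self._rows)

        def columnCount(self, parent=QModelIndex()):
            return len(self.HEADERS)

        def headerData(self, section, orientation, role=Qt.DisplayRole):
            if role == Qt.DisplayRole and orientation == Qt.Horizontal:
                return self.HEADERS[section]
            return None

        def data(self, index, role=Qt.DisplayRole):
            if role != Qt.DisplayRole:
                return None
            name, charge, consumption = self._rows[index.row()]
            if index.column() == 0:
                return name
            if index.column() == 1:
                return charge
            if index.column() == 2:
                return consumption
            # Derived column: remaining time = charge / consumption (assumed formula)
            return round(charge / consumption, 2) if consumption else "n/a"

        def update_rows(self, rows):
            """Replace the data when new robot messages arrive."""
            self.beginResetModel()
            self._rows = rows
            self.endResetModel()

The derived column never touches the database; it is computed in data() from the values you already store, which is exactly the kind of control QSqlQueryModel does not give you easily.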
Yes, it is better to update your view in real time and run a batch execute at a lower frequency to update the database. In general the app is the middle layer and the DB is the bottom layer for data monitoring, unless you use a DB with an in-memory shared cache.
EDIT: One important point: you cannot run updates from multiple threads (you can, but SQLite blocks the thread until it gets the lock), so it is best to run updates from a single thread.
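As a sketch of that single-writer pattern (the queue, the telemetry table, and the batch size are assumptions for illustration, not from the answer):

    import queue
    import sqlite3
    import threading

    updates = queue.Queue()  # connection threads put (robot_id, odometry) here

    def writer_loop(db_path="robots.db", batch_size=100):
        """Single thread that owns the SQLite connection and batches commits."""
        conn = sqlite3.connect(db_path)  # created inside the thread that uses it
        conn.execute(
            "CREATE TABLE IF NOT EXISTS telemetry (robot_id INTEGER, odometry REAL)"
        )
        while True:
            batch = [updates.get()]           # block until at least one item arrives
            while len(batch) < batch_size:
                try:
                    batch.append(updates.get_nowait())
                except queue.Empty:
                    break
            with conn:                        # one transaction per batch
                conn.executemany(
                    "INSERT INTO telemetry (robot_id, odometry) VALUES (?, ?)", batch
                )

    threading.Thread(target=writer_loop, daemon=True).start()

The per-robot connection threads only enqueue data; the GUI can be refreshed from the in-memory model immediately, while the database is written in batches by the one thread that holds the lock.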

How to create an appropriate database model for IM

Recently we have been developing the IM feature for our app, and we want to save the chat records with Core Data. The strategy we came up with is:
every account has a separate SQLite file;
every chat has a separate table (dynamically created; refer to this article), but the table structure is the same, for example:
sender_id
msg_id
content
msg_send_time
...
If we put all the chat messages in one table, we can fetch the records by "fromid and toid" to get the records for a specific dialog. However, if we have many thousands of messages in this table, we worry that the fetch request would be very slow, so we created a specific table for each dialog.
So, is there any better solution for this problem?
Creating "tables" for conversations dynamically is a very bad idea. This will create so much overhead that it will make your code completely inefficient.
Instead, use a single entity (not a table, mind you; Core Data is not a database) to capture the messages, and filter by user IDs.
This will perform without a glitch with hundreds of thousands of messages, far more than should ever be stored or displayed on a mobile device.
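A Core Data example would be Objective-C or Swift, but since the question already mentions per-account SQLite files, the single-store idea can be sketched with plain SQLite in Python: one messages table plus one composite index, instead of a table per conversation. The receiver_id column and all names here are assumptions:

    import sqlite3

    conn = sqlite3.connect("chat.db")
    conn.executescript(
        """
        CREATE TABLE IF NOT EXISTS messages (
            msg_id        INTEGER PRIMARY KEY,
            sender_id     INTEGER NOT NULL,
            receiver_id   INTEGER NOT NULL,   -- assumed; the question only lists sender_id
            content       TEXT,
            msg_send_time TEXT
        );
        -- One composite index makes the per-dialog fetch cheap,
        -- so there is no need for a table per conversation.
        CREATE INDEX IF NOT EXISTS idx_dialog
            ON messages (sender_id, receiver_id, msg_send_time);
        """
    )

    def fetch_dialog(from_id, to_id, limit=50):
        """Latest messages between two users, in either direction."""
        return conn.execute(
            """SELECT * FROM messages
               WHERE (sender_id = ? AND receiver_id = ?)
                  OR (sender_id = ? AND receiver_id = ?)
               ORDER BY msg_send_time DESC LIMIT ?""",
            (from_id, to_id, to_id, from_id, limit),
        ).fetchall()

With the index in place, the per-dialog fetch stays fast even as the single table grows, which is the same reason a single filtered Core Data entity performs well.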
