I'm using GetOrgChart to load data retrieved from JSON. This JSON contains base64 images that are quite heavy, so the org chart takes about a minute to appear.
I tried to load the images separately in renderNodeEvent, but by the time the image arrives (via an asynchronous call) the org chart has already been displayed, and I never see the retrieved image.
Does anyone have any advice?
Thank you, Sara
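A minimal sketch of the lazy-loading idea described above, assuming the chart is first rendered with lightweight placeholder images and each rendered node can then be located in the DOM. The /api/photos/:id endpoint, the response shape, and the [data-node-id] selector are illustrative assumptions, not part of GetOrgChart's documented API:

```typescript
// Sketch: render the chart with lightweight placeholders, then fetch each
// heavy base64 image asynchronously and swap it into the already-rendered node.
// The endpoint, payload shape, and selector below are assumptions for illustration.
async function loadNodeImage(nodeId: string): Promise<void> {
  const response = await fetch(`/api/photos/${nodeId}`); // assumed endpoint
  const { base64 } = await response.json();              // assumed payload shape

  // Locate the <img> inside the rendered node and replace its placeholder.
  const img = document.querySelector<HTMLImageElement>(
    `[data-node-id="${nodeId}"] img`
  );
  if (img) {
    img.src = `data:image/jpeg;base64,${base64}`;
  }
}

// Once the chart has rendered, kick off one request per visible node.
function loadAllImages(nodeIds: string[]): void {
  nodeIds.forEach((id) => void loadNodeImage(id));
}
```

The key point is to let the chart render immediately with small placeholders and swap in each heavy image as its request completes, rather than blocking the initial render on the full base64 payload.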
Problem:
I have huge .csv files with 16,000+ lines of data that I intend to upload to a Google Sheet via AWS Lambda (Node.js). The only problem is that the Google Sheets API has a read/write limit of 300 actions per minute (it would take 53 minutes until all 16,000 lines are added), and it would take me too long to split the dataset into pieces of 300, add them, and wait. I have tried uploading it in one go (which I'm currently doing), but then the data ends up in a single cell, which is not my goal. Is there a way (I'd be grateful for any documentation or article) for me to upload the data in one go and split it later? Or maybe tell Google Sheets to put the data into individual cells itself instead of me doing that?
What have I tried?
I have tried several things already. I uploaded the entire file in one go, but that resulted in just one cell being written with roughly 16,000 lines of data.
I have also tried waiting a minute after my read/write limit of 300 is reached and then writing again, but this resulted in huge wait times of 50+ minutes (that would be too expensive since it runs on AWS Lambda).
I'm at my wits' end and can't seem to find a solution to my problem. I'd be really grateful for a solution, a piece of documentation, or even an article. I tried finding resources on my own, but to no avail. Thank you in advance; if questions arise, feel free to ask me and I'll provide more information.
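One avenue that may help: the Sheets API usage limit is counted per request rather than per row, so a single spreadsheets.values.update (or append) call that carries a 2D array of values writes all rows in one action. A rough sketch with the Node.js googleapis client; the spreadsheet ID and range are placeholders:

```typescript
import { google } from "googleapis";

// Sketch: send all parsed CSV rows in one values.update request. The quota is
// counted per request, not per row, so a 2D array of 16,000 rows is one write.
// spreadsheetId and the range are placeholders.
async function uploadRows(rows: string[][]): Promise<void> {
  const auth = await google.auth.getClient({
    scopes: ["https://www.googleapis.com/auth/spreadsheets"],
  });
  const sheets = google.sheets({ version: "v4", auth });

  await sheets.spreadsheets.values.update({
    spreadsheetId: "YOUR_SPREADSHEET_ID", // placeholder
    range: "Sheet1!A1",                   // rows are written starting from this cell
    valueInputOption: "RAW",              // keep values exactly as sent
    requestBody: { values: rows },        // one inner array per spreadsheet row
  });
}
```

The single-cell result described above usually means the whole CSV was sent as one string; parsing it into a string[][] first (one inner array per row, one element per column) is what makes the values land in separate cells.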
So the thing is that I have some data in MongoDB that I want to represent in a dashboard,
and it's taking some time to fetch the selected documents from different collections and do the calculations needed to send the results back to the client.
So I had this idea to pre-write the required data, in the required format, in a dedicated collection. Whenever the client asks for the dashboard, I just fetch its data directly, so that I don't have to fetch data across different collections and do the calculations at request time.
By the way, this data is not updated frequently; let's say about 100 updates max per day.
Does this idea sound right, or does it have some drawbacks that I didn't think about?
Thank you in advance,
That's caching; your idea sounds just right.
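A minimal sketch of that precomputed-dashboard pattern with the MongoDB Node.js driver. The database, collection, and field names, as well as the aggregation itself, are placeholders:

```typescript
import { MongoClient } from "mongodb";

// Sketch: recompute after one of the ~100 daily updates (or on a schedule),
// store the result in a dedicated collection, then serve the dashboard
// straight from it. All names and the example pipeline are placeholders.
const client = new MongoClient("mongodb://localhost:27017");
const db = client.db("app");

async function refreshDashboard(): Promise<void> {
  // Example calculation: count documents per status in one source collection.
  const totals = await db
    .collection("orders")
    .aggregate([{ $group: { _id: "$status", count: { $sum: 1 } } }])
    .toArray();

  // Upsert a single summary document the dashboard endpoint can read directly.
  await db.collection("dashboard").updateOne(
    { name: "summary" },
    { $set: { totals, computedAt: new Date() } },
    { upsert: true }
  );
}

// Dashboard request: one cheap read instead of cross-collection queries.
async function getDashboard() {
  return db.collection("dashboard").findOne({ name: "summary" });
}
```

Since the data changes at most ~100 times per day, recomputing on every write (or on a short schedule) keeps the summary fresh without any noticeable load.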
Hi everyone,
I already made a time series table and line graph in the dashboard.
On 17 August 2022 it worked properly, but now the data is no longer displayed on the line graph and time series table, and I don't know why. Is this a bug, or is there anything I can do to fix this problem?
Please give me some advice, thanks.
Update:
The device is still on and can still send the latest telemetry data, but the time series data and line graph are not displayed.
[Screenshots: latest telemetry data, time series table, line graph]
Without further information it's hard to tell what the issue is in your case. There could be several causes for this, such as device inactivity.
Please check the log file thingsboard.log for any issues while sending the data to ThingsBoard.
Alternatively, take a look at the API Usage dashboard to verify that new data is being stored in ThingsBoard.
Disclaimer - I am not a software guy so please bear with me while I learn.
I am looking to use Node-RED as a parser/translator by taking data from a CSV file and sending out the rows of data at 1 Hz. Let's say 5-10 rows of data being read and published per second.
Eventually, I will publish that data to some Modbus registers but I'm not there yet.
I have scoured the web and tried several examples; however, as soon as I trigger the flow, Node-RED stops responding and I have to delete the source CSV (so it can't run any more) and restart Node-RED in order to get it back up and running.
I have many of the Big Nodes from this guy installed and have tried a variety of different methods but I just can't seem to get it.
If I can get a single column of data from a CSV file being sent out one row at a time, I think that would keep me busy for a bit.
There is a file node that will read a file one line at a time; you can then feed this through the csv node to parse the fields of each CSV line into an object you can work with.
The delay node has a rate-limiting function that can be used to limit the flow to processing 1 message per second, which achieves the rate you want.
All the nodes I've mentioned should be in the core set that ships with Node-RED.
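This is not a Node-RED flow export, but a plain Node.js sketch of the same pipeline in case it helps to see the behaviour the file, csv, and delay node chain produces: read a line, parse it, emit one row per second. The file name and the naive comma split (no quoting support) are placeholders:

```typescript
import { createReadStream } from "node:fs";
import { createInterface } from "node:readline";

// Plain-Node sketch of what the flow does: read the CSV one line at a time,
// split the line into fields, and emit one parsed row per second.
async function streamCsvAtOneRowPerSecond(path: string): Promise<void> {
  const lines = createInterface({ input: createReadStream(path) });

  for await (const line of lines) {
    const fields = line.split(",");  // the csv node does this parsing for you
    console.log(fields);             // stand-in for sending the message onwards
    await new Promise((resolve) => setTimeout(resolve, 1000)); // delay node's 1 msg/s rate limit
  }
}

streamCsvAtOneRowPerSecond("data.csv").catch(console.error);
```

Reading and rate-limiting line by line like this also avoids loading the whole file into memory at once, which is likely why the all-at-once attempts were locking things up.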
In AcaniUsers, I'm downloading the closest 20 users to me and displaying their profile pictures as thumbnails in a table view. User & Photo are both Resources because they each have an id (a MongoDB BSON ObjectId) on the server. Each user has a unique_id. Each Photo has four different sizes (images) on the server: square: 75x75, square@2x: 150x150, large: 320x480, large@2x: 640x960. But each device will only have two of these sizes, depending on whether it's an iPhone 3 or 4 (retina display). Each of these sizes has its own MongoDB collection, and all four images for each Photo share the same BSON ObjectId across these four collections.
In the future, I may give User a relationship called photos to allow a user to have more than one photo. Also, although I don't foresee this, I may add more Image sizes (types).
The fresh attribute on Image tells me whether I've downloaded the latest Image. I set this to NO whenever the Photo's ID changes, and back to YES after I've finished downloading the Image.
1. Should I store the four different images in Core Data or on the file system and just store their URLs in Core Data? I read somewhere that over 1 or 2 MB, you should store in the file system, not Core Data. So I was thinking of storing the square images in Core Data and the large images in the file system, but I'd rather store them all the same way to make things easier. So maybe I'll just store them all in the file system? What do you think?
2. Do you think I should discard the 75x75 & 320x480 sizes, since pretty soon iPhone 3s will be gone?
3. How can I improve my design of the entities and their attributes and relationships? For example, is the Resource entity even beneficial at all?
4. I'm displaying the Users with an NSFetchedResultsController. However, it doesn't know when a User's image gets updated, so the images don't show up until I scroll aggressively the first time. How do I let the NSFetchedResultsController know that a user's thumbnail has finished downloading? Do I have to use KVO?
To answer your questions:
1. I'd store them all in the file system and record the URL in the database. I've never been a big fan of storing image data in the DB. Plus, it'll simplify things a little to have all of the image storage uniform; that way, in your image-loading code, you don't have to worry about whether it's a type that's stored in the DB or on the file system.
2. No, I wouldn't do that yet. The iPhone 3 is going to be around for a bit longer; AT&T is still selling it as the cheap entry-level iPhone. I just saw a commercial the other night advertising it for $49.
3. Remove the Resource entity and add the id attribute to each of the classes. How you did it is actually bad. Abstract entities should only be used when you have a couple of entities that are almost identical and have only a few differences between them. Under the hood, Core Data will create only one table for an abstract entity and all of its children. So right now you're going to end up with a single table containing both your User and Photo entries, which can be bad when you're trying to query just one type of entity.
You should also delete the Image entity and move its attributes into the Photo entity. The Photo will always have those values associated with it, and the same values won't be shared between photos. Having them as a separate entity will cause a slowdown: you'll either need to load them with the photos, which requires a join (slow), or they'll be loaded one at a time when you access either the data or fresh attributes, which is also slow. When each of those faults is fired in the latter scenario, a separate query and round trip to the disk happen for each object. So when you loop through your pictures for display in the table, you'll be firing n queries instead of one, which can be a big difference in performance.
4. You can use KVO to do it. Have your table cell observe the User or Photo (depending on whether the Photo is already added to the User and you're changing its data, or you're adding a new Photo to the User on load completion). When the observer gets triggered, update the image being displayed.