Can we parallelize email download using the IMAP JavaMail API?

Can I use multiple threads to download emails through the Store?
Is there a sample available anywhere?

Each open folder uses a separate connection to the server and operations on that folder are serialized by that connection so there's a limit to how much parallelism you can get with a single folder. Some servers will allow you to open the same mailbox multiple times (using separate Folder objects), resulting in multiple connections.
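Since operations on one folder are serialized by its connection, the usual way to get parallelism is to open the same mailbox once per worker and give each worker a disjoint message range. Below is a minimal, hedged sketch: the range-partitioning and thread-pool part is runnable as-is, while the actual JavaMail calls each worker would make (`Store`, `Folder`, `getMessages`) are indicated in comments, since host and credentials are assumptions not given in the question.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class ParallelFetch {
    // Split messages 1..total into `parts` contiguous 1-based ranges,
    // matching the 1-based indexing of Folder.getMessages(start, end).
    static List<int[]> partition(int total, int parts) {
        List<int[]> ranges = new ArrayList<>();
        int base = total / parts, rem = total % parts, start = 1;
        for (int i = 0; i < parts; i++) {
            int size = base + (i < rem ? 1 : 0);
            if (size == 0) break;
            ranges.add(new int[]{start, start + size - 1});
            start += size;
        }
        return ranges;
    }

    public static void main(String[] args) throws Exception {
        int total = 10, threads = 3;  // assumed values for the sketch
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        for (int[] r : partition(total, threads)) {
            pool.submit(() -> {
                // Each worker would open its OWN Store/Folder, e.g.:
                //   Store store = session.getStore("imaps");
                //   store.connect(host, user, password);
                //   Folder inbox = store.getFolder("INBOX");
                //   inbox.open(Folder.READ_ONLY);
                //   Message[] msgs = inbox.getMessages(r[0], r[1]);
                System.out.println("worker fetches messages " + r[0] + "-" + r[1]);
            });
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);
    }
}
```

Note that whether multiple simultaneous opens of the same mailbox actually work depends on the server, as the answer above says, so the number of workers is something to probe rather than assume.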

Programmatically move or copy Google Drive file to a shared Windows folder/UNC

I have a Google Apps Script that builds a new CSV file any time someone edits one of our shared Google Sheets (via a trigger). The file gets saved to a dedicated Shared Drive folder (which is first cleaned out by trashing all existing files before writing the updated one). This part works splendidly.
I need to take that CSV and consume it in SSIS so I can datestamp it and load it into an MSSQL table for historical tracking, but aside from paying for some third-party tools (e.g. CDATA, COZYROC), I can't find a way to do this. Eventually this package will be deployed on our SQL Agent to run on a daily schedule, so it will run under a service account that won't have any access to the Google Shared Drive. If I can get that CSV over to one of the shared folders on our SQL Server, I will be golden...but that is what I am struggling with.
If something via Apps Script isn't possible, can someone direct me as to how I might programmatically get an Excel spreadsheet to open, refresh its dataset, then save itself and close? I can pull the data I need into Excel straight out of the Google Sheet using Power Query, but I need it to refresh itself in an unattended process on a daily schedule.
I found that CData actually has a tool called Sync which got us what we needed. There is a limited-options free version of the tool (which they claim is "free forever") that runs as a local service. On a set schedule, it can query all sorts of sources, including Google Sheets, and will write out to various destinations.
The free version is limited in terms of the sources and destinations you can use (though there are quite a few), and it only allows 2 connection definitions. That said, you can define multiple source files, but only 1 source type (e.g. I can define 20 different Google Sheets to use in 20 different jobs, but can only use Google Sheets as my source).
I have it set up to read my shared Google Sheet and output the CSV to our server's share. An SSIS project reads the local CSV, processes it as needed, and then writes to our SQL Server. It seems to work pretty well if you don't mind having an additional service running and don't need a series of different sources and destinations.
Link to their Sync landing page: https://www.cdata.com/sync/
Use the Buy Now button, load the free version into your cart, and proceed to check out. They will email you a key and a download link.

How to download 600k emails from a POP3 server

How can I download all emails (632,000 of them) from a POP server? Currently macOS Mail limits me to 200,000 emails. Is there a client capable of doing the job without that limitation? I do not have access to the server configuration; I am just a user.
Wow, that's quite a collection! Provided there's no way of using IMAP: although I haven't tried this myself, Thunderbird could quite possibly do it, as I don't believe there is a limit as long as you don't run out of disk space or RAM, and attachments will be compressed.

What does 'scaling Node to multiple instances' mean?

I've been building a web app in Node (and have built others in ASP.NET MVC) and am trying to understand the concept of scaling an app to many users. Assuming a web app gets thousands (or millions) of concurrent users, I understand that the load should be split across more than one instance of Node.
My question is: do all these instances run on the same server (virtual machine)? If so, do they (should they) access the same database? And does this mean (if I use nginx) that nginx would be responsible for routing the different requests to the different Node instances?
Also, assuming uploaded files are saved on the file system, do the different instances access the same directories? If not, and a person uploads images and later connects and is routed to a different Node instance, how do they access the images they uploaded earlier? Is there some sort of sync process between the file systems?
Any resources/articles regarding this would be very helpful!
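On the nginx part of the question: yes, a common setup is nginx acting as a reverse proxy that load-balances requests across several Node processes, whether those processes live on one machine (different ports) or on separate machines. A minimal sketch, where the ports and server name are assumptions:

```nginx
# Pool of Node instances; nginx round-robins across them by default.
upstream node_app {
    server 127.0.0.1:3000;
    server 127.0.0.1:3001;
    server 127.0.0.1:3002;
}

server {
    listen 80;
    server_name example.com;

    location / {
        proxy_pass http://node_app;
        # Preserve the original host and client address for the app.
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

As for uploaded files: instances behind such a proxy typically don't sync local disks; instead they write to shared storage (a network mount or an object store), so any instance can serve a file regardless of which instance received the upload.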

Sync two mongodb databases from different locations programmatically

I have a web application (MongoDB database, AngularJS front-end, NodeJS back-end) that is deployed in two places. The first is on a static IP so that it can be accessed from anywhere; the second is on a local machine so that users can work when no internet connection is available. Data can therefore be inserted by users in both places. My requirement is to sync the two databases whenever an internet connection is available on the local machine, i.e. from the local database to the remote database and vice versa, without losing any data on either side.
One way I am considering is to provide a sync button in the application and sync the databases using insert/update queries. I am not sure whether there is a better, automated way to do this so that the databases sync automatically, like data being copied in a replica set.
Please provide the best solution to do this task. Thanks in advance.
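One way to make the insert/update approach deterministic is to store a lastModified timestamp on every document and, at sync time, upsert in both directions whichever copy is newer. The merge rule itself can be sketched independently of MongoDB; below, plain maps stand in for the two databases, and the field names and types are assumptions for illustration only:

```java
import java.util.HashMap;
import java.util.Map;

public class TwoWaySync {
    // A document: its payload plus a last-modified timestamp.
    record Doc(String value, long lastModified) {}

    // Merge both stores in place: each side keeps the newer copy of every id.
    static void sync(Map<String, Doc> local, Map<String, Doc> remote) {
        Map<String, Doc> merged = new HashMap<>(local);
        remote.forEach((id, doc) ->
            merged.merge(id, doc, (a, b) -> a.lastModified() >= b.lastModified() ? a : b));
        local.clear();  local.putAll(merged);
        remote.clear(); remote.putAll(merged);
    }

    public static void main(String[] args) {
        Map<String, Doc> local = new HashMap<>();
        Map<String, Doc> remote = new HashMap<>();
        local.put("a", new Doc("edited offline", 200));
        remote.put("a", new Doc("old value", 100));
        remote.put("b", new Doc("added remotely", 150));
        sync(local, remote);
        System.out.println(local.get("a").value());  // edited offline
        System.out.println(local.get("b").value());  // added remotely
    }
}
```

Note this sketch ignores deletions and conflicting edits with identical timestamps; a production sync would also need tombstones and a conflict policy, which is why a built-in mechanism like replica sets (when both nodes can reach each other) is usually preferable.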

Server Farm Sync

What is the preferred method of keeping a server farm synchronized? It's currently a pain to have to upload to multiple servers. Looking for a balance of ease of use and cost. I read somewhere that a DFS can do it, but that's something that requires the servers to run on a domain. Are there any performance issues with using a DFS?
We use SVN to keep the server files in dedicated repositories and then have a script that pulls the latest files out of SVN onto each of the servers in the web farm (6 servers). This employs the TortoiseSVN utility, as it has an easier command-line interface for the admins, and updates all the machines from a single server, usually the lowest IP address in the pool.
We ensure no server has any local modifications in the checked-out repository to avoid conflicts, and we get a change log with the file histories in SVN, with the benefit of rollback too. We also include any admin scripts, so these get the benefit of versioning and change logs.
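The update step described above can be as simple as looping over the farm and running svn update on each box. A hedged sketch, using plain ssh and command-line svn rather than TortoiseSVN; the host names and checkout path are assumptions, and it is written as a dry run that prints the commands it would issue:

```shell
#!/bin/sh
# Hosts in the web farm and the working-copy path -- both assumptions.
HOSTS="web1 web2 web3 web4 web5 web6"
WC_PATH=/var/www/site

for host in $HOSTS; do
    # Remove 'echo' to actually run the update over ssh.
    echo ssh "$host" svn update "$WC_PATH"
done
```

Running this from the single admin server keeps every machine on the same revision, and a bad deploy can be rolled back by updating to an earlier revision (`svn update -r REV`).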
