Can't point Perforce workspace to a different stream

My main Perforce workspace has somehow ended up pointing to a task stream rather than our main stream, and now it won't let me change back. When I right-click and edit the workspace to try to point it at the main stream, I get this error:
Client 'my_workspace' has files opened from a task stream; must revert, or revert -k before switching.
I went through every file and found only one that was mapped to the task stream; I have resolved this. So now all my files are mapped to my main workspace and are meant for our main stream.
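In case it helps the next reader, the error message itself names the way out: clear the opened-file state first, then switch the stream. A minimal command-line sketch, where //depot/main stands in for your actual main stream path (an assumption):

    p4 -c my_workspace opened                   # list files still opened in this workspace
    p4 -c my_workspace revert -k //...          # clear the open state; -k keeps local file content
    p4 client -s -S //depot/main my_workspace   # switch the workspace to the main stream

The revert -k variant only reverts Perforce's record of the open files, so any local edits stay on disk and can be re-opened against the main stream afterwards.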

Related

Spring Integration: Inbound File Adapter drops files when service restarts

We're using the S3InboundFileSynchronizingMessageSource feature of Spring Integration to locally sync and then send messages for any files retrieved from an S3 bucket.
Before syncing, we apply a couple of S3PersistentAcceptOnceFileListFilter filters (to check the file's TimeModified and Hash/ETag) to make sure we only sync "new" files.
Note: We use the JdbcMetadataStore table to persist the record of the files that have previously made it through the filters (using a different REGION for each filter).
Finally, for the S3InboundFileSynchronizingMessageSource local filter, we have a FileSystemPersistentAcceptOnceFileListFilter -- again on TimeModified and again persisted, but in a different region.
The issue is: if the service is restarted after the file has made it through the 1st filter but before the message source successfully sent the message along, we essentially drop the file and never actually process it.
What are we doing wrong? How can we avoid this "dropped file" issue?
I assume you use a FileSystemPersistentAcceptOnceFileListFilter for the localFilter since S3PersistentAcceptOnceFileListFilter is not going to work there.
Let's see how you use those filters in the configuration! I wonder if switching to the ChainFileListFilter for your remote files helps you somehow.
See docs: https://docs.spring.io/spring-integration/docs/current/reference/html/file.html#file-reading
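For what it's worth, a minimal sketch of that composition, assuming the AWS SDK v1 S3ObjectSummary listing type and a shared DataSource; the region names and key prefix are made up, and a true ETag-based check would need a small custom subclass of the stock filter:

    import javax.sql.DataSource;

    import com.amazonaws.services.s3.model.S3ObjectSummary;

    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;
    import org.springframework.integration.aws.support.filters.S3PersistentAcceptOnceFileListFilter;
    import org.springframework.integration.file.filters.ChainFileListFilter;
    import org.springframework.integration.jdbc.metadata.JdbcMetadataStore;

    @Configuration
    public class RemoteFilterConfig {

        @Bean
        public ChainFileListFilter<S3ObjectSummary> remoteFilter(DataSource dataSource) {
            // one store per REGION, as in the question, both backed by the same table
            JdbcMetadataStore timeStore = new JdbcMetadataStore(dataSource);
            timeStore.setRegion("S3_TIME_MODIFIED");   // hypothetical region name
            JdbcMetadataStore etagStore = new JdbcMetadataStore(dataSource);
            etagStore.setRegion("S3_ETAG");            // hypothetical region name

            ChainFileListFilter<S3ObjectSummary> chain = new ChainFileListFilter<>();
            chain.addFilter(new S3PersistentAcceptOnceFileListFilter(timeStore, "s3-"));
            // the stock filter keys on name + lastModified; an ETag comparison
            // would be a small custom subclass, elided here
            chain.addFilter(new S3PersistentAcceptOnceFileListFilter(etagStore, "s3-"));
            return chain;
        }
    }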
EDIT
if the service is restarted after the file has made it through the 1st filter but before the message source successfully sent the message along
I think Gary is right: you need a transaction around that polling operation which includes filter logic as well.
See docs: https://docs.spring.io/spring-integration/docs/current/reference/html/jdbc.html#jdbc-metadata-store
This way the TX is not committed until the message for a file leaves the polling channel adapter. Therefore, after a restart you will simply be able to synchronize the rolled-back files again, because they are no longer present in the store used for filtering.
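As a minimal sketch in the Java DSL, assuming a messageSource bean wired with the filters above and a JDBC-backed PlatformTransactionManager (bean names and the poll interval are assumptions):

    import java.io.File;

    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;
    import org.springframework.integration.aws.inbound.S3InboundFileSynchronizingMessageSource;
    import org.springframework.integration.dsl.IntegrationFlow;
    import org.springframework.integration.dsl.IntegrationFlows;
    import org.springframework.integration.dsl.Pollers;
    import org.springframework.transaction.PlatformTransactionManager;

    @Configuration
    public class S3SyncFlowConfig {

        @Bean
        public IntegrationFlow s3SyncFlow(S3InboundFileSynchronizingMessageSource messageSource,
                                          PlatformTransactionManager transactionManager) {
            return IntegrationFlows
                    .from(messageSource,
                          e -> e.poller(Pollers.fixedDelay(5000)
                                  // the metadata-store inserts made by the filters join
                                  // this transaction; a crash before hand-off rolls them
                                  // back, so the file is picked up again on the next poll
                                  .transactional(transactionManager)))
                    .handle(message -> process((File) message.getPayload()))
                    .get();
        }

        private void process(File file) {
            // downstream processing goes here
            System.out.println("Processing " + file);
        }
    }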

Is it possible to turn off the lock set up on a file by the SFTP Session Factory

I'm struggling with the cached SFTP session factory. Namely, I was suffering from "session unavailable" errors because I used too many sessions in my application. Currently I have one default, non-cached session, which writes files to the SFTP server but sets up locks on them, so they can't be read by any other user. I'd like to avoid that. Ideally, the lock would be released after a single file is uploaded. Is that possible?
Test structure
Start polling adapter
Upload file to remote
Check whether files are uploaded
Stop polling adapter
Clean up remote
When you deal with transferring data over the network, you need to be sure that you release the resources you use to do that. For example, be sure to close the InputStream after sending data to the SFTP server. It is really not a framework responsibility to close it automatically. Moreover, you could hand over not an InputStream, but just a plain byte[] read from it. That's the only cause of locking-like behavior I can think of.
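A minimal sketch of that advice, assuming a hypothetical messaging-gateway abstraction (UploadGateway) in front of the SFTP outbound adapter; the point is only that the stream, and with it the open handle, is released as soon as the payload is handed off:

    import java.io.InputStream;
    import java.nio.file.Files;
    import java.nio.file.Path;

    public class SftpUploadExample {

        // hypothetical gateway in front of the SFTP outbound channel adapter
        public interface UploadGateway {
            void upload(Object payload);
        }

        // option 1: stream the file, but let try-with-resources close it
        static void sendAsStream(UploadGateway gateway, Path localFile) throws Exception {
            try (InputStream in = Files.newInputStream(localFile)) {
                gateway.upload(in); // the stream is closed as soon as this block exits
            }
        }

        // option 2: read the bytes up front so no handle is ever held open
        static void sendAsBytes(UploadGateway gateway, Path localFile) throws Exception {
            gateway.upload(Files.readAllBytes(localFile));
        }
    }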

Azure Logic App: Why does the FTP connector delay in identifying files?

I have been working with the FTP connector in my Azure Logic App to unzip files on an FTP server from a Source to a Destination folder.
I have configured the FTP connector to trigger whenever a file is added to the Source folder.
The problem I face is the delay before the connector triggers.
Once I add the zip file to the Source folder, it takes around one minute for the Azure FTP connector to identify and pick up the file over FTP.
To identify whether the issue is with the Azure FTP connector or with the FTP server, I tried using Blob storage instead of the FTP server, and the connector was triggered within a second!
What I understand from this is that the delay happens on the FTP side, or in the way the FTP connector communicates with the FTP server.
Can anyone point out the areas of optimization here? What changes can I make to minimize this delay?
I also noticed this behaviour of the FTP trigger and found the reason for the delay in the FTP Trigger doco here:
https://learn.microsoft.com/en-us/azure/connectors/connectors-create-api-ftp#how-ftp-triggers-work
...when a trigger finds a new file, the trigger checks that the new file is complete, and not partially written. For example, a file might have changes in progress when the trigger checks the file server. To avoid returning a partially written file, the trigger notes the timestamp for the file that has recent changes, but doesn't immediately return that file. The trigger returns the file only when polling the server again. Sometimes, this behavior might cause a delay that is up to twice the trigger's polling interval.
Firstly, you need to know that the Logic App file trigger is different from a Function: it won't fire immediately. When you set up the trigger you will find it needs an interval, so even if a file is already there, it won't trigger until the interval elapses.
Then it's about how the FTP trigger works. When it fires the Logic App, if you check the trigger history you will find multiple Succeeded entries but only one fired run, with a delay of about 2 minutes. For the reason, see the connector reference, How FTP triggers work, which describes this behavior.
The trigger returns the file only when polling the server again. Sometimes, this behavior might cause a delay that is up to twice the trigger's polling interval.

Spring Integration file poller - file writable filter

I have a scenario where we have a Spring Integration file poller waiting for files to be added to a directory; once a file is written, we process it. We have some large files and a slow network, so in some cases we are worried that the file transfer may not have finished when the poller wakes and attempts to process the file.
I've found this topic, 'file-inbound-channel-unable-to-read-file', which suggests using a custom filter to check whether the file is readable before processing.
This second topic, 'how-to-know-whether-a-file-copying-is-in-progress-complete', suggests that the file must also be writable before it can be considered ready for processing.
I might have expected that this check that the file is readable/writable would already be done by the core Spring Integration code?
In the meantime I'm planning on creating a filter as per the first topic, but using the 'rw' check that the second suggests.
This is a classic problem, and just checking whether the file is writable is not reliable: what happens if the network crashes during the file transfer? You will still have an incomplete file.
The most reliable way to handle this is to have the sender transfer the file with a temporary suffix and rename it after the transfer is complete. Another common technique is to send a second file, foo.done, indicating that foo.xxx is complete, and use a custom filter for that.
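A minimal sketch of that second technique as a custom Spring Integration filter, assuming the marker is the data file's name plus a .done suffix (the class name and naming convention are illustrative):

    import java.io.File;
    import java.util.ArrayList;
    import java.util.List;

    import org.springframework.integration.file.filters.FileListFilter;

    // Accept a data file only once its companion ".done" marker has arrived.
    public class DoneFileListFilter implements FileListFilter<File> {

        @Override
        public List<File> filterFiles(File[] files) {
            List<File> accepted = new ArrayList<>();
            if (files == null) {
                return accepted;
            }
            for (File file : files) {
                if (file.getName().endsWith(".done")) {
                    continue; // never emit the marker files themselves
                }
                // foo.xxx is complete once foo.xxx.done exists alongside it
                File marker = new File(file.getParentFile(), file.getName() + ".done");
                if (marker.exists()) {
                    accepted.add(file);
                }
            }
            return accepted;
        }
    }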

How to know whether a file has been changed on the same server or transferred from another server in ASP.NET

We have a webfarm scenario in which FileSystemWatcher is used to get notified of changes to files. When a file is changed or created on one server, the change gets noticed and is transferred to the other servers in the webfarm. The transferred files on the other servers then raise the changed event again and are synced back to the originating server, which creates a redundant sync operation. We want to sync changes only when they were made on the same server, not when they are changes transferred from other servers. How could this be possible?
Have the servers monitor a temporary directory where uploaded files get stored.
Let the FileSystemWatcher sync with the other servers at the same time it moves the file to the intended working directory.
The sync sends the file to the other servers' working directories of course, thus bypassing the other FileSystemWatchers. Voila!
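The same flow, sketched here in Java with WatchService purely to illustrate the shape (in ASP.NET the equivalent is a FileSystemWatcher on the upload directory); the directory names and the replication helper are hypothetical:

    import java.nio.file.*;

    public class UploadDirWatcher {

        public static void main(String[] args) throws Exception {
            Path uploadDir = Path.of("/srv/uploads");   // hypothetical temp upload directory
            Path workingDir = Path.of("/srv/working");  // directory the application serves from

            WatchService watcher = FileSystems.getDefault().newWatchService();
            // only the upload directory is watched; the working directory is not,
            // so files replicated straight into working dirs never re-trigger a sync
            uploadDir.register(watcher, StandardWatchEventKinds.ENTRY_CREATE);

            while (true) {
                WatchKey key = watcher.take();
                for (WatchEvent<?> event : key.pollEvents()) {
                    Path fileName = (Path) event.context();
                    Path local = workingDir.resolve(fileName);
                    // move into this server's working directory...
                    Files.move(uploadDir.resolve(fileName), local,
                               StandardCopyOption.REPLACE_EXISTING);
                    // ...and push directly to the peers' working directories,
                    // bypassing their upload watchers (hypothetical helper)
                    replicateToPeers(local);
                }
                key.reset();
            }
        }

        private static void replicateToPeers(Path file) {
            // placeholder: copy the file to each peer's working directory
        }
    }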