I have three files, a1.ksh, a2.ksh, and a3.ksh, in the same location. Whenever I modify any one of them, the change is reflected in the remaining two. Why is this happening?
I am using Linux.
If anyone knows why, please let me know.
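A common cause, assuming nobody is copying the files around behind your back, is that the three names are hard links to one and the same file (created, for example, with ln a1.ksh a2.ksh). A quick way to check, sketched here in Python, is to compare device and inode numbers:

```python
import os

def same_file(path_a, path_b):
    """True if both paths are hard links to the same underlying file."""
    sa, sb = os.stat(path_a), os.stat(path_b)
    # Same device plus same inode means one file with several names,
    # so editing it through any name "changes" all of them.
    return (sa.st_dev, sa.st_ino) == (sb.st_dev, sb.st_ino)
```

From the shell, ls -li shows the inode number in the first column and the link count after the permissions. (Python's standard library also offers this check directly as os.path.samefile.)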
I've built a Python-based application (which runs 24/7) that logs some information to a YAML file every few minutes. It worked perfectly for a few days. Then, after approximately two weeks, one line in the YAML file was filled with NUL characters (416 NUL characters, to be precise).
The suspicion is that someone may have started the already-running application a second time, so both instances tried to write to the same YAML file, which could have caused this. But I couldn't replicate it.
I just want to know the cause of this issue.
Please let me know if anyone has faced the same issue before.
Some context about the file writing:
The YAML file is opened in append mode and a list is written to it using the code below:
with open(file_path, 'a') as file:
    yaml.dump(summary_list, file)
Concurrent access is a possible cause for this, especially when you're appending. For example, it may be that both instances opened the file and set their start position at the same offset, but let the file grow to the sum of both appended dumps. That would leave some part of the file unwritten, which might explain the NULs.
Whatever happened depends more on your OS and your filesystem than on YAML. But even if we knew those details, we couldn't tell for sure.
I recommend using a proper logging framework to avoid such issues; you can dump the YAML to a string and log that.
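If two instances really did append to the file at the same time, an advisory lock would at least serialize the writes. A minimal sketch, assuming Linux (the fcntl module is POSIX-only) and following the suggestion above to dump the YAML to a string first:

```python
import fcntl

def append_exclusively(file_path, text):
    """Append text under an exclusive advisory lock.

    A second instance calling this blocks until the first releases
    the lock, so the two appends cannot interleave.
    """
    with open(file_path, 'a') as f:
        fcntl.flock(f, fcntl.LOCK_EX)
        try:
            f.write(text)
            f.flush()
        finally:
            fcntl.flock(f, fcntl.LOCK_UN)
```

The caller would pass yaml.dump(summary_list) as the text. Note that advisory locks only help if every writer takes them; a second process that opens the file without locking is still unprotected.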
I want to copy an Excel file from Windows to a Linux system, and I need to capture the file's last-modified date.
We can't take the last-modified date of the file on the Windows side.
I am using the code below to copy the file from Windows to Linux. The code transfers the file to the Linux system, but after it runs, the file's timestamp has changed. Is there any way to copy a file from Windows to Linux without modifying the file's timestamp? Please help.
%smb_init(username=**MYID**, password=%str(**password**), domain=**aa.aaa.com**);
%smb_load();
%smb_pull(windows=//files/Load/Test/Folder1/PIC Alerts/ABC Alerts.xlsx,
linux=/sasdata/test_files/folder2/ABC_Alerts.xlsx);
If you would like to copy files while preserving their attributes, enable X commands in SAS and let the OS handle the copying. After enabling them, it's as simple as using a built-in Linux file-copying command such as rsync.
For example, this will copy data, attributes, and timestamps:
rsync -av //source/mydata.xlsx /dest/mydata.xlsx
Once you confirm it works as expected, you can build it into your SAS program and pass it to Linux:
x 'rsync -av //source/mydata.xlsx /dest/mydata.xlsx';
The automatic macro variable &sysrc will tell you if it was successful. A value of 0 means success. Non-zero means failure.
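For comparison, if the Windows share can be mounted on the Linux box (a CIFS mount is an assumption here, not part of the original setup), the same timestamp preservation is available from Python's standard library:

```python
import os
import shutil

def copy_with_timestamp(src, dst):
    """Copy src to dst, keeping the last-modified time intact."""
    shutil.copy2(src, dst)          # copy2 copies data *and* metadata;
    return os.path.getmtime(dst)    # plain copy() would reset the mtime
```

shutil.copy (without the 2) copies only the data and permission bits, which is exactly the timestamp-losing behavior described in the question.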
I am looking for a solution that helps me track any changes made to files. I am working on a Linux system where a lot of people have access to the same files. Sometimes it happens that someone changes something in a file and doesn't notify the other users. So I would like to write a script that checks whether a given file path (or files) has changed; if so, it should write to a file, say "controlfile_File1.txt", something like "File changed %date, line XXX". I know that I can use an md5 checksum for that, but it only tells me that the file changed; I would also like to know which line changed. I have also thought about copying the file somewhere and diffing the copy against the current file.
Any ideas?
Thanks for the support.
Your question goes beyond what Linux offers as a platform: Linux can show you the last modification date of a file, and the last time a file was accessed (even without being modified), but that's it.
What you are looking for, as already mentioned in the comments, is a version control system. Git is one of them, but there are also others (SourceTree, SourceSafe, ClearCase, ...), each with its own (dis)advantages.
One thing they all have in common is that modifying a file no longer happens quite so casually: every time somebody modifies a file under version control, they are asked why they did it, and this is recorded for later reference.
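That said, the asker's own fallback idea (keep a copy and diff it against the current file) can be done with nothing but Python's standard library; the @@ hunk headers in the output carry the line numbers the asker wants to record:

```python
import difflib

def changed_lines(old_text, new_text):
    """Unified diff between a saved copy and the current file contents."""
    diff = difflib.unified_diff(
        old_text.splitlines(keepends=True),
        new_text.splitlines(keepends=True),
        fromfile='saved_copy', tofile='current')
    return ''.join(diff)
```

A watcher script could append this output, plus a timestamp, to "controlfile_File1.txt" and then refresh the saved copy.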
I have a batch job that integrates an XML file from time to time, possibly daily. After integrating it, the job moves the file into a folder like /archives/YYMMDD (the current day). The problem is that the same file can be integrated twice. So I need a script that verifies the file (it's possible with the diff command, but that risks becoming a bottleneck), and the part I can't resolve is how to get the location of the second file.
P.S. I can't install anything on the server.
Thanks in advance.
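Since nothing can be installed on the server, one standard-library-only approach is to hash the incoming file and compare it against the hashes of everything already archived. This is a sketch: the /archives layout comes from the question, but the function names are made up here:

```python
import hashlib
import os

def sha256_of(path):
    """Hash a file's contents in chunks, so large XML files stay cheap."""
    h = hashlib.sha256()
    with open(path, 'rb') as f:
        for chunk in iter(lambda: f.read(65536), b''):
            h.update(chunk)
    return h.hexdigest()

def already_archived(new_file, archive_root='/archives'):
    """True if a file with identical contents sits under archive_root."""
    target = sha256_of(new_file)
    for root, _dirs, files in os.walk(archive_root):
        for name in files:
            if sha256_of(os.path.join(root, name)) == target:
                return True
    return False
```

Unlike running diff against every archived file, this never compares two files line by line, so a duplicate is detected in one pass over the archive regardless of which YYMMDD folder it landed in.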
Recently I was asked the following question in an interview.
Suppose I try to create a new file named myfile.txt in the /home/pavan directory.
It should automatically create myfileCopy.txt in the same directory.
A.txt then it automatically creates ACopy.txt,
B.txt then BCopy.txt in the same directory.
How can this be done with a script? I believe the script should run from crontab.
Please don't use inotify-tools.
Can you explain why you want to do this?
Tools like Vim can automatically create a backup copy of a file you're working on. Other tools like Dropbox (which works on Linux, Windows, and Mac) can version files, keeping all the copies of a file from the last 30 days.
You could do something by creating aliases for the tools you use to create these files. You edit a file with the tools you tend to use, and the alias could create a copy before invoking the tool.
Otherwise, your choice is to use crontab to make backups periodically.
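For what it's worth, the crontab route could look like the sketch below. The directory and the .txt/Copy naming come from the question; everything else is a guess at what the interviewer meant. The idea: scan the directory and create an XCopy.txt for every X.txt that doesn't have one yet.

```python
import os
import shutil

def make_copies(directory):
    """Create <name>Copy.txt beside every <name>.txt that lacks one."""
    created = []
    for name in sorted(os.listdir(directory)):
        base, ext = os.path.splitext(name)
        # Skip files that are themselves copies, and non-.txt files.
        if ext == '.txt' and not base.endswith('Copy'):
            copy_path = os.path.join(directory, base + 'Copy' + ext)
            if not os.path.exists(copy_path):
                shutil.copy2(os.path.join(directory, name), copy_path)
                created.append(os.path.basename(copy_path))
    return created
```

Scheduled from crontab (e.g. a hypothetical * * * * * python3 /home/pavan/make_copies.py entry), this runs at most once a minute, so the copy is not truly "immediate"; genuinely immediate copies would need inotify, which the question rules out.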
Addendum
Let me explain: suppose I have the directory /home/pavan. Now I create the file myfile.txt in that directory; immediately, the file myfileCopy.txt should be generated automatically in the same folder.
pavan
There's no easy user tool that could do that. In fact, the way you stated it, it's not clear exactly what you want to do and why. Backups are done for two reasons:
To save an older version of the file in case I need to undo recent changes. In your scenario, I'm simply saving a new unchanged file.
To save a file in case of disaster. I want that file to be located elsewhere: On a different computer, maybe in a different physical location, or at least not on the same disk drive as my current file. In your case, you're making the backup in the same directory.
Tools like Vim can be set to automatically back up a file you're editing. This satisfies reason #1 above: getting back an older revision of the file. Emacs can keep an endless series of numbered backups.
Tools like Dropbox create a backup of your file in a different location across the aether. This satisfies reason #2, keeping the file safe in case of a disaster. Dropbox also versions the files you save, which covers reason #1 as well.
Version control tools can also do both, if I remember to commit my changes. They store all changes in my file (reason #1) and can store this on a server in a remote location (reason #2).
I was thinking of crontab, but what would I back up? Any file that had been modified (reason #1)? That doesn't make much sense if I'm storing it in the same directory; all I would have are duplicate copies of files. It would make sense to back up the previous version, but how would a simple crontab job know about it? Do you want to keep the older versions of a file, or only the original copy?
The only real way to do this is at the system level, with tools that layer over the disk I/O calls. For example, at one location we used NetApp filers to create a $HOME/.snapshot directory that contained the state of your directory every minute for an hour, every hour for a day, and every day for a month. If someone deleted a file or messed it up, there was a good chance a usable version existed somewhere under $HOME/.snapshot.
On my Mac, I use a combination of Time Machine (which backs up the entire drive every hour and gives me snapshots of my drive stretching back over a year and a half) and Dropbox, which keeps my files stored on the main Dropbox server somewhere. I've been saved many times by that combination.
I now understand that this was an interview question, though I'm not sure what the position was. Did the questioner want you to come up with a system-wide way of implementing this, as for a network tech position? Or was this one of those brain leaks someone comes up with on the spur of the moment in an interview, having been too drunk the night before to work out what they should really ask the applicant?
Did they want a whole discussion of what backups are for, and why backing up a file immediately upon creation into the same directory is a non-optimal solution? Or were they attempting to solve an issue that came up, but aren't technical enough to understand the real problem?