I have an application built on the Lotus Notes client. I want to check whether Lotus Notes is already running before starting my application, so that I can skip prompting the user for a password when "Don't prompt for password from other notes-based programs" is checked.
One method is to get all the running processes and look for the nlnotes.exe and notes2.exe processes to confirm.
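For illustration, here is a minimal sketch of that process check, written in Java (assuming Java 9+ for ProcessHandle; the executable names are the ones mentioned above):

    import java.util.Locale;

    public class NotesRunningCheck {
        // Returns true if any visible process image name matches one of the
        // Notes client executables. Visibility of other users' processes
        // may be limited by OS permissions.
        static boolean isNotesRunning() {
            return ProcessHandle.allProcesses()
                    .map(p -> p.info().command().orElse(""))
                    .map(cmd -> cmd.toLowerCase(Locale.ROOT))
                    .anyMatch(cmd -> cmd.endsWith("nlnotes.exe")
                                  || cmd.endsWith("notes2.exe"));
        }

        public static void main(String[] args) {
            System.out.println("Notes running: " + isNotesRunning());
        }
    }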
Is there any other method to achieve the same result?
To be more specific, I want to know whether any registry entries are made to indicate that Notes is currently running. Since we can't open two instances of the Notes client, this made me think IBM might have used a registry entry to check for a running instance.
Kindly correct me if I'm wrong.
The registry would not be a good place for info like that, because if the client crashed the registry data would need to be cleaned up. The same is true for lock files. So while I can't say for sure, I believe IBM detects whether the client is already running by looking for in-memory objects - e.g., shared memory sections, mutexes, etc. Using Process Explorer, I see several shared memory sections associated with the Notes processes. One likely candidate is a section called -LTSCS-22275429-MEM9, but I don't know how that name is generated, or whether it changes with a reinstall, reboot, etc. It would take a fair amount of experimentation to determine that - and then of course one would have to figure out how to write the code to detect it, but that's my best guess as to how it's done.
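If you wanted to experiment along those lines, here is a sketch of probing for a named shared memory section from Java via JNA (this assumes the JNA library is available; the section name is just the one observed above and may well differ per install, version, or reboot):

    import com.sun.jna.Native;
    import com.sun.jna.Pointer;
    import com.sun.jna.WString;
    import com.sun.jna.win32.StdCallLibrary;

    public class SectionProbe {
        // Minimal hand-rolled binding for the two Win32 calls we need.
        interface Kernel32 extends StdCallLibrary {
            Kernel32 INSTANCE = Native.load("kernel32", Kernel32.class);
            int FILE_MAP_READ = 0x0004;
            Pointer OpenFileMappingW(int desiredAccess, boolean inheritHandle, WString name);
            boolean CloseHandle(Pointer handle);
        }

        // Returns true if a shared memory section with the given name exists.
        static boolean sectionExists(String name) {
            Pointer h = Kernel32.INSTANCE.OpenFileMappingW(
                    Kernel32.FILE_MAP_READ, false, new WString(name));
            if (h == null) return false;
            Kernel32.INSTANCE.CloseHandle(h);
            return true;
        }

        public static void main(String[] args) {
            // CAUTION: section name observed on one machine only; treat it as
            // an assumption, not a documented identifier.
            System.out.println(sectionExists("-LTSCS-22275429-MEM9"));
        }
    }

If OpenFileMappingW returns a handle, some process currently holds the section open, which would suggest the client is running; you would still need the experimentation described above before trusting the name.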
D2R talking to ProcMon and Firefox?
I was trying to find out whether certain settings were being saved to a local file or remotely on the game server. Some settings are local and others remote; got it. But along the way:
What does it mean when it says that the video game has created a file mapping recognizing the existence of ProcMon? What the heck is it doing when it says it "created" Firefox.exe? That second one in particular doesn't make the slightest bit of sense.
I was expecting to see that changing the keybind mappings would save to a local file; the game server doesn't care what button I push to do the thing. I was also expecting to see that another layer of settings was not stored locally (meaning it must be sent to and stored on the game server). Both of these things were true.
I did not expect to see the game application having some kind of interaction with unrelated applications. I would like to understand how to interpret this information; I've used ProcMon to troubleshoot many times and have never seen anything like this.
I figured I'd post here, after posting on SuperUser, since I want to get input from software developers who might have encountered this scenario before!
I would like to initiate a series of validation steps on the client side on files opened within a changelist before allowing the changelist to be submitted.
For example, I wish to ensure that if a file is opened for add, edit, or remove as part of a changelist, a particular related file will be treated appropriately based on a matrix of conditions for that corresponding file:
Corresponding file being opened for add/edit/remove
Corresponding file existing on disk vs. not existing on disk
Corresponding file existing in depot vs. not existing in depot
Corresponding file having been changed vs. not having been changed relative to depot file
These validation steps must be initiated before the submit is accepted by the Perforce server. Furthermore, the validation must be performed on the client side since I must be able to reconcile offline work with the copies on clients' disks.
Environment:
Perforce 2017.2 server
MacOS and Windows computers submitting to different branches
Investigative Avenues Already Covered
The initial design was a strictly client-side custom tool, but this is not ideal, since it would change the flow that users are familiar with, and I would also have to implement a custom GUI.
Among other approaches, I considered creating triggers in 2017.2; however, even if I were to use a change-content trigger with all the changelist files available on the server, I would not be able to properly perform the validation and remediation steps.
Another possibility would be to use a change-submit trigger and the trigger script variables in 2017.2 to get the client's IP, hostname, current working directory, etc., so that a script on the server could try to connect remotely to the client's computer. However, running any script on the client's computer, and in particular operating on their local workspace, would require credentials that most likely will not be made available.
I would love to use a change-submit trigger on the Perforce server to initiate a script/bundled executable on the client's computer to perform p4 operations on their workspace to complete the validation steps. However, references that I've found (albeit from years ago) indicate that this is not possible:
https://stackoverflow.com/a/16061840
https://perforce-user.perforce.narkive.com/rkYjcQ69/p4-client-side-submit-triggers
Updating files with a Perforce trigger before submit
Thank you for reading and in advance for your help!
running any script on the client's computer, and in particular operating on their local workspace, would require credentials that most likely will not be made available.
This is the crux of it -- the Perforce server is not allowed to send the client arbitrary code to execute. If you want that type of functionality, you'd have to punch your own security hole in the client (and then come up with your own way of making sure it's not misused), and it sounds like you've already been down that road and decided it's not worth it.
The initial design was a strictly client-side custom tool, but this is not ideal, since it would change the flow that users are familiar with, and I would also have to implement a custom GUI.
My recommendation would be to start with that approach and then look for ways to decrease friction. For example, you could use a change-submit trigger to detect whether the user skipped the custom workflow (perhaps by having the custom tool put a token in the change description for the trigger to validate), and then give them an error message that puts them back on track, like "Please run Tools > Change Validator, or contact wanda#yourdomain.com for help".
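For illustration, here is a sketch of such a token-checking change-submit trigger, written in Java (the token format, triggers-table entry, and credential setup are all assumptions, and trigger scripts are more commonly shell or Python):

    import java.io.BufferedReader;
    import java.io.InputStreamReader;

    // Hypothetical change-submit trigger: rejects the submit unless the change
    // description contains a token left by the custom validation tool.
    // Assumed triggers table entry:
    //   validate change-submit //depot/... "java -cp /p4/triggers ValidateToken %change%"
    // Assumes the server host has a p4 client configured with suitable credentials.
    public class ValidateToken {
        static final String TOKEN = "[validated-by-tool]"; // assumed token format

        public static void main(String[] args) throws Exception {
            String change = args[0];
            Process p = new ProcessBuilder("p4", "describe", "-s", change)
                    .redirectErrorStream(true).start();
            boolean found = false;
            try (BufferedReader r = new BufferedReader(
                    new InputStreamReader(p.getInputStream()))) {
                String line;
                while ((line = r.readLine()) != null) {
                    if (line.contains(TOKEN)) { found = true; break; }
                }
            }
            p.waitFor();
            if (!found) {
                // Output is shown to the submitting user when the trigger fails.
                System.out.println("Please run Tools > Change Validator before submitting.");
                System.exit(1); // non-zero exit rejects the submit
            }
        }
    }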
Last week I worked on an incident where it was stated that a Java agent had caused a server to terminate. The first diagnosis was that objects were probably not being recycled in a sufficient manner.
After some testing (and bringing down the server multiple times :-) ) I noticed in my Notes client that the view was corrupt.
I could have avoided this if I had been able to check whether a view was OK or not.
for a database I can check if it exists
for a view I can check if it exists
But can I also check whether a view is in good condition or not? Or is only a client (Notes, Admin) capable of doing this?
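For context, the existence checks I mean look roughly like this via the Domino Java API (server, database, and view names are placeholders; this assumes Notes.jar is on the classpath):

    import lotus.domino.Database;
    import lotus.domino.NotesFactory;
    import lotus.domino.NotesThread;
    import lotus.domino.Session;
    import lotus.domino.View;

    public class ViewCheck {
        public static void main(String[] args) throws Exception {
            NotesThread.sinitThread(); // required for local Notes/Domino sessions
            try {
                Session session = NotesFactory.createSession();
                Database db = session.getDatabase("Server/Org", "app.nsf"); // placeholders
                System.out.println("Database opens: " + db.isOpen());
                // getView returns null when the view does not exist, but a
                // non-null View is no guarantee that its index isn't corrupt.
                View view = db.getView("SomeView");
                System.out.println("View exists: " + (view != null));
            } finally {
                NotesThread.stermThread();
            }
        }
    }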
I wish there were a programmatic way, Patrick. The fixup task (load fixup -C) is one of the surefire ways to get details of corruption, but it's not helpful to you in this scenario.
We have built a project in Enterprise Guide for the purpose of creating easily understandable and maintainable code. The project contains a set of process flows which should be run in a specific order. We need to run this project on a Linux server machine, where the SAS Metadata Server is running.
The basic idea is to extract this project into SAS code, which we would be able to run from the command line in Linux as a batch job.
Question 1:
Is there any other way to schedule a batch job on a Linux-hosted SAS server? I have read about VBS scripting for scheduling/running batch jobs, but for this to be done on a Linux server, an installation of WINE is required, which, on a production machine that already runs a number of other important applications, is almost completely out of the question.
Is there a way to specify a complete project export into SAS code, provided that I give the specific order in which the process flows should run? I have tried out the ordered list, which lets you build a list of tasks to run in order (although there is no way to choose a whole process flow as a single task), but unfortunately this ordered list itself cannot later be exported as SAS code.
Our current solution is the following:
We export each individual process flow of the SAS EG project into SAS code, and then create another SAS program with %include lines to run all the extracted code files in the order we want. This is of course a possible solution, but definitely not the most elegant one.
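For example, the driver program is just a series of %include statements, run in batch with something like "sas driver.sas" (the paths are placeholders for wherever the exported .sas files live):

    /* driver.sas - runs the exported process flow code in the required order */
    %include "/sasdata/project/flow1_extract.sas";
    %include "/sasdata/project/flow2_transform.sas";
    %include "/sasdata/project/flow3_report.sas";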
Question 2:
Since I don't know exactly how the code is exported afterwards, are there any dangers I should bear in mind with the solution I chose?
Is there any other, more elegant way?
You have a couple of options from what I'm familiar with, plus I suspect if Dom happens by he'll know more. These answers are based on EG 6.1, which is the current version (ships with 9.4); it's possible some of these things may not be true in earlier versions.
First, if you're running Enterprise Guide from Windows, you can schedule the job locally (on any Windows machine with Enterprise Guide). You're not scheduling the server directly, you schedule Windows to launch an EG process that connects to the server and does its magic. That's how I largely interact with scheduling (because I have a fairly 'light' scheduling need).
Second, from the blog post "Four Ways to Schedule SAS Tasks", options 3 and 4 may be helpful for you: the SAS Platform Suite, which is designed in part for scheduling, and using SAS Management Console to schedule via operating system tools. Both are very helpful.
Third, you may want to look into SAS Stored Processes, which should be schedulable. A process flow can be converted into a stored process.
For your specific questions:
Question 1: When you export a process flow or a project, at least in 6.1 you have the option to change the order in which the programs are exported. It's manual, so it's probably not perfect, but it does give you that option. (The code seems to be in creation order by default, which is sub-optimal.) The project export does group process flows together, but you don't have the option of manipulating the order of the process flows; you have to move each program around individually, which would be tedious. It also, of course, gives you less flexibility if you need to run programs multiple times.
Question 2: As Stig Eide points out in comments, make sure your LRECL system option is > 256 (the default) or you run some risk of code being cut off. In 9.2+ this is modifiable; just place LRECL=32767 in your config.sas file.
The application must be designed in such a way that it supports multiple users committing into an SVN repository. I'm done with the application and the related stuff; however, I'm stuck with this multi-user thing. How can I achieve this? I saw somewhere that for every thread you have to instantiate a separate SVNRepository driver. This tells me it's not thread-safe... or maybe I'm getting the whole thing wrong. Any help on this issue is appreciated. Thanks.
I got the above info (the part in italics) from here.
I'm an SVNKit developer; let me explain how things work.
The SVNRepository class represents one SVN connection with its own credentials. It is not thread-safe, which means that you can perform only sequential operations on it. See this article for more details:
http://vcs.atspace.co.uk/2012/09/21/are-svnkit-methods-reenterable/
So if your application tries to create several commits at the same time, you should use several independent SVNRepository instances. The good news is that no special synchronization code is required; all synchronization is performed on the server side. More good news: when a commit for a certain SVNRepository object is finished or cancelled, you can reuse the connection to start another commit. But note that if you use the http protocol, you can't reuse the same connection to commit on behalf of another user, even if you change the credentials for the connection (SVNRepository#setAuthenticationManager).
To create a commit without a working copy, use SVNRepository#getCommitEditor, which starts the commit transaction. To stop the transaction, use either ISVNCommitEditor#closeEdit or ISVNCommitEditor#abortEdit; you can't perform other operations on the SVNRepository instance until the commit transaction is finished.
The ISVNCommitEditor instance should describe your virtual working copy: it tells SVNKit about your current knowledge of the latest working copy state. If the description doesn't correspond to the real latest state, you get a "File or directory is out of date; try updating" error.
http://vcs.atspace.co.uk/2012/07/20/subversion-remote-api-committing-without-working-copy/
You can use -1 instead of a real revision in ISVNEditor#openFile/openDir to disable these checks, but that can cause another problem: you could overwrite changes without knowing about them.
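To make the flow concrete, here is a minimal sketch of committing a single new file without a working copy (the URL, credentials, and paths are placeholders; use one SVNRepository instance per concurrent commit, as described above):

    import java.io.ByteArrayInputStream;

    import org.tmatesoft.svn.core.SVNCommitInfo;
    import org.tmatesoft.svn.core.SVNURL;
    import org.tmatesoft.svn.core.internal.io.dav.DAVRepositoryFactory;
    import org.tmatesoft.svn.core.io.ISVNEditor;
    import org.tmatesoft.svn.core.io.SVNRepository;
    import org.tmatesoft.svn.core.io.SVNRepositoryFactory;
    import org.tmatesoft.svn.core.io.diff.SVNDeltaGenerator;
    import org.tmatesoft.svn.core.wc.SVNWCUtil;

    public class RemoteCommit {
        public static void main(String[] args) throws Exception {
            DAVRepositoryFactory.setup(); // enable http/https access
            SVNRepository repo = SVNRepositoryFactory.create(
                    SVNURL.parseURIEncoded("http://host/svn/repo")); // placeholder
            repo.setAuthenticationManager(
                    SVNWCUtil.createDefaultAuthenticationManager(
                            "user", "secret".toCharArray()));

            byte[] contents = "hello".getBytes("UTF-8");
            ISVNEditor editor = repo.getCommitEditor(
                    "add file.txt", null /* locks */, false /* keepLocks */, null);
            try {
                editor.openRoot(-1);           // -1 skips the out-of-date check
                editor.addFile("file.txt", null, -1);
                editor.applyTextDelta("file.txt", null);
                SVNDeltaGenerator gen = new SVNDeltaGenerator();
                String checksum = gen.sendDelta("file.txt",
                        new ByteArrayInputStream(contents), editor, true);
                editor.closeFile("file.txt", checksum);
                editor.closeDir();             // close the root directory
                SVNCommitInfo info = editor.closeEdit(); // ends the transaction
                System.out.println("Committed r" + info.getNewRevision());
            } catch (Exception e) {
                editor.abortEdit();            // always finish the transaction
                throw e;
            }
        }
    }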
Another option is to commit using real working copies and real changes on the filesystem (using SvnOperationFactory#createCommit). But even in this case, have a look at the first link to learn which objects can and can't be reused across threads.
Hope this helps, if you have other questions, feel free to ask on SVNKit mailing list.