Fetching data from a running application - Linux

Is there an efficient method to find specific data in the memory of a running application on Linux?
So far, I have found /proc/[pid]/maps, which describes the memory layout of the running application. But how can I find specific data in it? For example, if the running application is Firefox, how could I find the HTML code of the current window? Or how could I at least find the title of the current Firefox window?
Is there a way to find the virtual addresses of variables containing known data? Could I use D-Bus to achieve any of this?
In most cases, the data I would like to fetch is displayed in a GUI, if that makes it any easier.
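For the "find known data" part of the question, the mechanics are small enough to sketch. The following Python sketch (the function name is illustrative) walks /proc/[pid]/maps for readable regions and searches each one through the companion file /proc/[pid]/mem. Note that reading another process's memory requires ptrace permission (typically root, or a process of your own user, subject to the kernel's ptrace restrictions); scanning your own PID always works.

```python
import re


def find_in_memory(pid, needle):
    """Scan the readable memory regions of a process for a byte pattern.

    Parses /proc/<pid>/maps for region boundaries, reads each readable
    region through /proc/<pid>/mem, and returns the virtual addresses of
    every occurrence of `needle`.
    """
    hits = []
    with open(f"/proc/{pid}/maps") as maps, \
         open(f"/proc/{pid}/mem", "rb", buffering=0) as mem:
        for line in maps:
            m = re.match(r"([0-9a-f]+)-([0-9a-f]+)\s+(\S+)", line)
            if not m or "r" not in m.group(3):
                continue  # skip unreadable regions
            start, end = int(m.group(1), 16), int(m.group(2), 16)
            try:
                mem.seek(start)
                data = mem.read(end - start)
            except (OSError, ValueError, OverflowError):
                continue  # e.g. [vvar]/[vsyscall] cannot be read this way
            pos = data.find(needle)
            while pos != -1:
                hits.append(start + pos)
                pos = data.find(needle, pos + 1)
    return hits
```

This answers the "virtual addresses of variables containing known data" question directly; finding structured data like the current page's HTML in Firefox is much harder, since it lives in internal data structures rather than as one contiguous string.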

Related

electron store data -> read/write from/to json file on network drive

I'm currently working on a small electron application that is meant to be installed on all local machines in our firm and should be able to read/write
to our shared network drive.
So far I have been using electron-store because of its simplicity.
I'm very new to electron (or nodejs for that matter) development and electron-store gives me a nice and easy way to store data.
So what I'm looking for exactly is:
A way (or node module) to store data locally (as is the default with electron-store) as well as store data (preferably the same way) on a network drive.
I'd like to organize all data inside an object and save/read to/from .json.
Do you think that is possible with electron-store? I couldn't figure it out.
Thanks in advance for your suggestions!
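Setting electron-store itself aside, the underlying idea — keep an object in memory and persist it as JSON to a configurable directory, which can just as well be a mounted network share — is small enough to sketch. This Python sketch (class name and layout are illustrative, not part of any package) writes via a temp file plus an atomic replace, so a crash mid-write can't leave a half-written JSON file on the shared drive:

```python
import json
import os
import tempfile


class JsonStore:
    """Minimal electron-store-like persistence: one dict, one .json file.

    `directory` can point at a local folder or a mounted network share.
    """

    def __init__(self, directory, name="config"):
        self.path = os.path.join(directory, name + ".json")
        try:
            with open(self.path) as f:
                self.data = json.load(f)
        except (FileNotFoundError, json.JSONDecodeError):
            self.data = {}

    def get(self, key, default=None):
        return self.data.get(key, default)

    def set(self, key, value):
        self.data[key] = value
        self._flush()

    def _flush(self):
        # Write to a temp file in the same directory, then atomically
        # replace, so readers never observe a partially written file.
        fd, tmp = tempfile.mkstemp(dir=os.path.dirname(self.path))
        with os.fdopen(fd, "w") as f:
            json.dump(self.data, f, indent=2)
        os.replace(tmp, self.path)
```

One caveat: the atomicity of the final rename is a local-filesystem guarantee; on network filesystems (SMB/NFS) it is best-effort, and concurrent writers from different machines still need some form of locking.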

Generate background process with CLI to control it

To give context, I'm building an IoT project that requires me to monitor some sensor inputs. These include Temperature, Fluid Flow and Momentary Button switching. The program has to monitor, report and control other output functions based on those inputs but is also managed by a web-based front-end. What I have been trying to do is have a program that runs in the background but can be controlled via shell commands.
My goal is to be able to do the following on a command line (bash):
pi#localhost> monitor start
sensors are now being monitored!
pi#localhost> monitor status
Temp: 43C
Flow: 12L/min
My current solution has been to create two separate programs: one that sits in the background, and a lightweight CLI. The background process listens on a bidirectional Unix domain socket, which the CLI uses to send it commands; it then sends responses back through the same socket for the CLI to process and display. This has given me many headaches, but it seemed the better option compared to network sockets or mapped memory. I just have occasional problems with socket access when my program is terminated improperly, which then requires me to "clean" the directory by manually deleting the stale socket file.
I'm also hoping to ensure there is only ever one instance of the monitor program running at any given time. I currently achieve this by writing my PID to a file, which I check for when the program starts; if the file exists, I terminate with an error. I really don't like this approach, as it feels too hacky.
So my question: Is there a better way to build a background process that can be easily controlled via command line? or is my current solution likely the best available?
Thanks in advance for any suggestions!
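Both pain points described above have standard fixes: hold an exclusive flock on the pidfile instead of checking for its existence (the kernel drops the lock when the process dies, however it dies, so a stale file never blocks the next start), and unlink a stale socket path before binding. A Python sketch under those assumptions (paths and the command set are illustrative):

```python
import fcntl
import os
import socket
import sys


def acquire_single_instance(pid_path):
    """Single-instance guard: hold an exclusive flock on the pidfile.

    Unlike an existence check, the lock is released automatically when
    the process exits or crashes, so no manual cleanup is ever needed.
    """
    fd = os.open(pid_path, os.O_RDWR | os.O_CREAT, 0o644)
    try:
        fcntl.flock(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)
    except BlockingIOError:
        sys.exit("monitor is already running")
    os.ftruncate(fd, 0)
    os.write(fd, str(os.getpid()).encode())
    return fd  # keep this fd open for the daemon's whole lifetime


def serve(sock_path, handle):
    """Accept CLI connections on a Unix domain socket, one command each."""
    try:
        os.unlink(sock_path)  # clear a stale socket from an unclean exit
    except FileNotFoundError:
        pass
    srv = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    srv.bind(sock_path)
    srv.listen(5)
    while True:
        conn, _ = srv.accept()
        with conn:
            cmd = conn.recv(1024).decode().strip()
            conn.sendall(handle(cmd).encode())
```

The CLI side is then just connect, send the command word, print the reply — which matches the `monitor start` / `monitor status` interaction shown above.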

Node Webkit / Electron: get apps running in the background

Is it possible to fetch a list of apps running in the background using node-webkit or Electron?
I want to make an app for rendering secure videos, so I want to make sure that screen-grab or screen-recording apps aren't running in the background.
There is a package for that:
https://www.npmjs.com/package/ps-node
However,
I will suggest that this is a poor solution for blocking screen capture. Pursuing it further turns into a cat-and-mouse game of trying to block every avenue of capture:
It's trivial to start an application under a different name to get past your list,
and if you try to block by comparing executable hashes, building the application with a different compiler changes the hash.
If you are OK with not having absolute security and making more of a best effort, then I suggest you explore, in addition to the above, operating system support for blocking screen capture, such as:
https://code.msdn.microsoft.com/windowsapps/Disable-screen-capture-00efe630
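For what it's worth, the check such packages perform is just process-table enumeration, which is easy to reproduce yourself — and, per the caveats above, equally easy to evade. A Linux-only Python sketch reading /proc directly (ps-node shells out to `ps`, but the information is the same):

```python
import os


def list_processes():
    """Enumerate running processes by reading /proc on Linux.

    Returns (pid, command-name) pairs — roughly the data a name-based
    screen-recorder blocklist would have to work with.
    """
    procs = []
    for entry in os.listdir("/proc"):
        if not entry.isdigit():
            continue  # /proc also contains non-process entries
        try:
            with open(f"/proc/{entry}/comm") as f:
                procs.append((int(entry), f.read().strip()))
        except OSError:
            continue  # the process exited while we were scanning
    return procs
```

The evasion problem is visible right in the return value: the name comes from the process itself, so renaming the binary defeats any blocklist built on it.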

Sending command-line parameters when using node-windows to create a service

I've built some custom middleware on Node.js for a client which runs great in user space, but I want to make it a service.
I've accomplished this using node-windows, which works great, but the client has occasional large bursts of data so I'd like to allocate a little more memory using the --max-old-space-size command line parameter. Unfortunately, I don't see how to configure that in my service set-up wrapper for node-windows.
Any suggestions?
FWIW, I'm also thinking about changing how I parse the data, e.g. treating it more as a stream, but since this is the first time I've used Node and the project is going live in a couple of days, I'm hoping to find a quick and dirty option that'll get us to an up-and-running status easily, to be adjusted later.
Thanks!
Use node-windows v0.1.14 or higher; the ability to add flags was merged in that version. The relevant issue is https://github.com/coreybutler/node-windows/issues/159.

Call Visitors web stat program from PHP

I've been looking into different web statistics programs for my site, and one promising one is Visitors. Unfortunately, it's a C program and I don't know how to call it from the web server. I've tried using PHP's shell_exec, but my web host (NFSN) has PHP's safe mode on and it's giving me an error message.
Is there a way to execute the program within safe mode? If not, can it work with CGI? If so, how? (I've never used CGI before)
Visitors looks like a log analyzer and report generator. It's probably best set up as a cron job that creates static HTML pages once a day or so.
If you don't have shell access to your hosting account, or some sort of control panel that lets you set up cron jobs, you'll be out of luck.
Is there any reason not to just use Google Analytics? It's free, and you don't have to write it yourself. I use it, and it gives you a lot of information.
Sorry, I know it's not a "programming" answer ;)
I second Jonathan's answer: this is a log analyzer, meaning that you must feed it the webserver's logfile as input and it generates a summary of it. Given that you are on a shared host, it is unlikely that you can access that file, and even if you could, it probably contains entries for all the websites hosted on the machine (setting up separate logging for each VirtualHost is certainly possible with Apache, but I don't know whether it is common practice).
One possible workaround would be to write out a logfile from your own pages. However, this is rather tricky and can have a severe performance impact (for one, you have to serialize the writes to the logfile if you don't want to get garbage in it from time to time). All in all, I would suggest going with an online analytics service, like Google Analytics.
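For completeness, the "serialize the writes" part is only a few lines if you do go that route. A Python sketch (the path and log format are illustrative) that takes an exclusive flock around each append so concurrent page requests can't interleave their lines:

```python
import fcntl


def append_log_line(log_path, line):
    """Append one log line under an exclusive advisory lock.

    Opening in append mode plus flock keeps concurrent writers from
    interleaving, at the cost of briefly serializing every request.
    """
    with open(log_path, "a") as f:
        fcntl.flock(f, fcntl.LOCK_EX)
        f.write(line.rstrip("\n") + "\n")
        fcntl.flock(f, fcntl.LOCK_UN)
```

This is also where the performance concern comes from: every request in the serving process briefly queues on that lock, which is exactly the overhead a hosted analytics service avoids.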
As fortune would have it I do have access to the log file for my site. I've been able to generate the HTML page on the server manually - I've just been looking for a way to get it to happen automatically. All I need is to execute a shell command and get the output to display as the page.
Sounds like a good job for an intern.
=)
Call your host and see if you can work out a deal for doing a shell execute.
I managed to solve this problem on my own. I put the following lines in a file named visitors.cgi:
#!/bin/sh
printf "Content-type: text/html\n\n"
exec visitors -A /home/logs/access_log
