I need some information from my GPSD server running on my NTP master server.
The number of satellites it is seeing
Which satellites it is using for the position fix (maybe also the SNR)
Which satellites it is seeing, since there are a lot of them (is this possible?)
I am going to output this to PHP, so it must be simple.
The GPSD source contains the file gpsd.php, which can deliver the current position and satellite info ("skyview") either as a finished HTML page or as a JSON string. So you need to make sure a web server with PHP support runs on your master server, and then you can call http://ntp-server/path/to/gpsd.php to get it. Append ?op=json to the URL to return the JSON result.
You can get just the php file here: https://github.com/yazug/gpsd/raw/master/gpsd.php
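To pull the numbers you need out of that JSON, a minimal sketch along these lines should work, assuming the script returns a gpsd-style SKY report in which each entry of the satellites array carries PRN, ss (SNR) and used; the URL and field names are assumptions, so adjust them to whatever your gpsd.php actually emits:
<?php
// Hypothetical URL; point it at wherever gpsd.php is installed.
$json = file_get_contents('http://ntp-server/path/to/gpsd.php?op=json');
$data = json_decode($json, true);

// Assumes a gpsd-style SKY report: each satellite entry has
// PRN, ss (SNR in dB) and used (true if it contributes to the fix).
$sats = isset($data['satellites']) ? $data['satellites'] : array();
$used = 0;
foreach ($sats as $s) {
    if (!empty($s['used'])) {
        $used++;
        echo "PRN {$s['PRN']}  SNR {$s['ss']} dB\n";
    }
}
echo "Satellites seen: " . count($sats) . ", used in fix: $used\n";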
Beat Bolli: I think you meant this one: https://github.com/yazug/gpsd/raw/master/gpsd.php.in (they have renamed it)
It suggests using the ?poll; command, but it hangs for me when I try to read the response...
I'm writing a program that does some actions with WebDriver and AutoIt in Python. I want to do two things before I start selling my code:
Add a one-time activation code to my software, so the program works only on one PC.
Make my program able to receive updates from the internet once I add more features or fix existing ones.
Is this possible with Python alone? Or what is the standard method to do it?
On the client side you need to use the hard-disk serial and/or the partition UUID, or the operating system install timestamp plus something similar, to generate a serial code.
On the server side you need an API that stores the hardware serial and validates whether the computer is authorized. Your client then checks on startup whether the activation is valid.
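As a rough sketch of the client side, using only the Python standard library (here the MAC address from uuid.getnode() stands in for a disk serial, which would need platform-specific calls; the salt is a placeholder):
import hashlib
import uuid

SECRET_SALT = "replace-me"  # placeholder; hide this better in real code

def machine_fingerprint():
    # Derive a stable identifier for this PC (here: the MAC address).
    hw_id = str(uuid.getnode())
    return hashlib.sha256((hw_id + SECRET_SALT).encode("utf-8")).hexdigest()

# Send this fingerprint to your activation API together with the licence key,
# and check on every start-up that the server still accepts it.
print(machine_fingerprint())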
The second question I can't answer.
Regarding the second part of your question:
Create a text file containing the latest version of your application and put it on your web server, e.g. http://download.example.com/example-app-version.txt, so you can fetch and read its value later.
In your Python code, download and read that text when your app runs (for Python 3+, import urllib.request and use urllib.request.urlretrieve) and compare it against the installed version (an if statement).
E.g.
if latestVer > installedVer:
    # update
else:
    # application continues
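Putting the whole check together, a minimal sketch (the URL is a placeholder, the file is assumed to contain a single version string such as 1.2.3, and urlopen is used here instead of urlretrieve so the value can be read directly):
import urllib.request

INSTALLED_VERSION = (1, 0, 0)
VERSION_URL = "http://download.example.com/example-app-version.txt"  # placeholder

def parse_version(text):
    # Turn "1.2.3" into (1, 2, 3) so versions compare numerically.
    return tuple(int(part) for part in text.strip().split("."))

with urllib.request.urlopen(VERSION_URL) as response:
    latest = parse_version(response.read().decode("utf-8"))

if latest > INSTALLED_VERSION:
    print("Update available:", latest)  # download and apply the update here
else:
    print("Application is up to date")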
I'm trying to build a web interface for Git on Node.js.
Currently I have one problem: wrong Unicode encoding during 'git commit'. Commit messages show up as gibberish in the log, and I have no clue at which step or in which way I need to correct it.
At this point I have:
1) UTF-8 encoded HTML page for interface;
2) Node.js child_process.spawn() to execute git commands;
3) ["-C",repo.path,"commit","-m",post.msg] as an argument list to pass to git;
When I execute the same command from the git shell (under Windows, if it matters), everything is fine.
Any suggestions?
Thanks in advance!
Update
I guess I won't have this question answered, but I'll still add one detail:
it feels like somewhere the message is converted from UTF-8 to ISO 8859-1.
Update2
Looks like ISO 8859-1 is the default encoding of CMD.exe (which processes my commands)... I still have no idea what to do about it.
The cause of the problem was not git, cmd or node.js. It was my own silly mistake.
On the client I wrapped the data with encodeURIComponent before sending it. On the server I unwrapped it with unescape. It took far too long to notice.
Now, after replacing unescape with decodeURIComponent, it works perfectly well.
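For illustration, the difference between the two (unescape decodes each %XX as a single Latin-1 character, while decodeURIComponent treats the bytes as UTF-8):
var encoded = encodeURIComponent("日本語"); // "%E6%97%A5%E6%9C%AC%E8%AA%9E"

unescape(encoded);           // mojibake: one Latin-1 character per UTF-8 byte
decodeURIComponent(encoded); // "日本語"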
I read @Dmitri's original question about how to use fastcgi_finish_request() and tried to follow the example from its answer in my Kohana 3.1 setup in index.php:
echo Request::factory()
->execute()
->send_headers()
->body();
Right after that, I added:
fastcgi_finish_request();
sleep(5);
Initially, I thought it worked. But then I realised it only worked for every other request. Example:
Navigate to localhost (works, no pause)
Click link to localhost/controller (pause 5 seconds)
Click another link to localhost/controller (works again, no pause)
And it continues on like that. Am I missing something? Like maybe a setting in the php5-fpm config file?
Running PHP 5.3.5-1ubuntu7.2 with Suhosin-Patch, Nginx
Call session_write_close() before you call fastcgi_finish_request() to resolve this issue. PHP's default file-based session handler keeps the session file locked until the script ends, so a request that is still sleeping blocks the next request from the same browser session; closing the session first releases the lock:
session_write_close();
fastcgi_finish_request();
sleep(5);
Besides the server response itself (which you can control with the fastcgi_finish_request function, and rest assured it works that way), there can be other resources that block the (next) script from starting right away.
These can be file locks (commonly used for sessions) and other things. As you have not shared much code and we do not see your Kohana configuration, you should take a look at which components you use and which resources they acquire.
Is it because your web server only handles one PHP instance at a time and it is still executing the previous script?
I have a log file and want to create a webpage (possibly in Python, but not strictly) that works much like the unix "tail -f filename" command does (show new log lines as they are written to the file).
So that the user continuously sees the log right in the browser.
How would you implement this?
Tailon is a Python webapp that, among other things, provides tail -f-like functionality. In addition, wtee (a sister project of tailon) can make everything on its stdin viewable in the browser; its use is identical to the unix tee command: tail -f filename | wtee
I implemented this using jquery (.ajax) and php (json).
The flow is essentially as follows:
user calls an html page on their browser
html page contains an initial jquery .ajax call to a remote php script on the server that performs the required function, in this case, retrieving a few of the last lines of the file being 'tailed'
if no new lines are available, the php script just loops (while the ajax caller waits, i.e. long polling), and can be configured to time out if necessary (returning an appropriate value back to the ajax calling function on the client)
when new lines are detected by the php script, they are wrapped in a json response and sent back to the ajax calling function on the browser, which then appends them to the existing content of the page.
The javascript function will then recursively make the same ajax call, effectively sitting in an infinite loop.
In my specific implementation, I did the following (a stripped-down sketch follows this list):
both the ajax call on the client AND the php script on the server have timeouts, to handle, for example, broken connections nicely; this also ensures the ajax call does not wait forever
the ajax call passes a line number back to the server as a reference, telling it the last line number it received, so the server knows which lines to return; initially the value is zero, and the server will immediately return the last 10 lines of the file
when the php script is called, it uses the client's last line number to do a quick check on the file; if new lines have already been added it returns them immediately, if not it sits in a loop (sleeping 1 second) and checks the file's ctime (or mtime) to detect when new lines are written, which is more efficient than counting the lines of the file (which could be huge) every second
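A stripped-down sketch of the server-side script described above (the log path and parameter name are made up, and for brevity it re-reads the file each second instead of checking mtime; the full implementation is at the link below):
<?php
// longpoll_tail.php?last=N -- illustrative sketch only
$logfile  = '/var/log/myapp.log';  // placeholder path
$lastLine = isset($_GET['last']) ? (int)$_GET['last'] : 0;
$deadline = time() + 30;           // long-poll timeout

do {
    $lines = file($logfile, FILE_IGNORE_NEW_LINES);
    if ($lastLine === 0) {
        $lastLine = max(0, count($lines) - 10);  // first call: last 10 lines
    }
    if (count($lines) > $lastLine) {
        echo json_encode(array(
            'last'  => count($lines),
            'lines' => array_slice($lines, $lastLine),
        ));
        exit;
    }
    sleep(1);
} while (time() < $deadline);

echo json_encode(array('last' => $lastLine, 'lines' => array()));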
See my longpolling/realtime tail implementation using jquery and php here:
https://github.com/richardvk/web_file_tail
Scullog can share a local drive to the browser and stream the log file to it via Socket.IO. It runs on any platform (Windows/Linux/Mac), either as a service or in standalone mode.
You read the file and print the last lines to the page. You might also use a GET variable to define the number of rows to output, using ?n=x where x is the number of lines.
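A minimal sketch of that idea (the log path is a placeholder; file() is fine for moderately sized logs, but a huge file would need smarter seeking):
<?php
// tail.php?n=20 -- print the last n lines of the log
$n     = isset($_GET['n']) ? max(1, (int)$_GET['n']) : 10;
$lines = file('/var/log/myapp.log', FILE_IGNORE_NEW_LINES);  // placeholder path

header('Content-Type: text/plain; charset=utf-8');
foreach (array_slice($lines, -$n) as $line) {
    echo $line, "\n";
}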
I build my sites on localhost (running WAMP on Windows), and when I upload them to my server, I always get
"Cannot modify header information - headers already sent"
I understand that there shouldn't be any blank lines and so on, and usually this works out. But now I need to redirect someone after the headers have been sent; how can I make my server act like my localhost?
I'm using cPanel and WHM:
cPanel 11.25.0-R42399 - WHM 11.25.0 - X 3.9
CENTOS 5.4 x86_64 virtuozzo on vps
I would appreciate any help.
In short, you need to prevent PHP from outputting anything to the browser before you get to the point where you want to use the header() function.
This should be done by careful programming practices, of which your 'no blank lines' is one, or by storing PHP's output in an output buffer, and only outputting when you're ready for it.
See the ob_start() and ob_flush() functions. You use ob_start() at the start of your application. This disables output and stores it in a buffer. When you're ready to start outputting, call ob_flush() and PHP will send the buffer's contents to the browser, including the headers that have been set up to that point. If you don't call ob_flush(), the buffer is output (flushed) at the end of the script anyway.
The reason why it works in your WAMP development environment is most likely that output buffering is already enabled by default in the php.ini. Quite often these all-in-one packages enable a default buffer for the first 4k bytes or so. However, it is generally better to explicitly start and flush the buffer in your code, since that forces better coding practices.
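A minimal sketch of the pattern (the redirect target and condition are placeholders):
<?php
ob_start();  // buffer all output from here on

echo "Some HTML that would normally have been sent to the browser already...";

$needRedirect = true;  // placeholder condition
if ($needRedirect) {
    ob_end_clean();                            // discard the buffered output
    header('Location: some-other-page.php');   // headers can still be sent
    exit;
}

ob_end_flush();  // otherwise flush the buffer (headers + body) as usual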
Well,
I guess with more thought and better programming you can manage to keep all redirects before any HTML is written.
This problem is solved by the old rules...
@user31279: The quickest and dirtiest way I know of is to use @ to suppress the warning, so e.g.
@header('Location: some-other-page.php');