tailLines and sinceTime in the logging API do not work simultaneously - node.js

I am using Container Engine, and my pods are hosted there.
I am trying to fetch logs using the log API:
http://localhost:8000/api/v1/namespaces/app-test/pods/designer-0/log?tailLines=100&sinceTime=2017-09-17T10:47:58Z
If I use either query param separately, it works and shows the proper result, but if I use them simultaneously, only the top 100 lines are returned and the sinceTime param appears to be ignored.
My scenario is that I need the logs from a specific time onwards, in chunks of 100 lines at a time.
I am not sure whether this is a bug or simply not implemented.

I found this in the API reference manual:
https://kubernetes.io/docs/api-reference/v1.6/
tailLines - If set, the number of lines from the end of the logs to
show. If not specified, logs are shown from the creation of the
container or sinceSeconds or sinceTime
So that means if you specify tailLines, it starts from the end. I don't see any option explicitly mentioned other than limitBytes, but you will have to play around with it, as it does not guarantee a number of lines.

tailLines=X tells the server to start that many lines from the end
sinceTime tells the server to start from the specified time
the options are mutually exclusive

Thanks all,
I later realized that it is not ignoring sinceTime; the intended functionality of tailLines is to return lines from the end.
So if I set sinceTime to 10 PM yesterday, it returns the records from that time, and if tailLines is also set, it returns the most recent lines from that chunk.
So it was working as expected. I need to play with limitBytes to get the logs in chunks from that time, instead of the full logs.
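For anyone else trying this, here is a rough Python sketch of that chunking idea (the original question was node.js, but the query parameters are the same). It assumes kubectl proxy is still listening on localhost:8000 as in the URL above, and uses timestamps=true so the timestamp of the last returned line can become the sinceTime of the next request, since limitBytes does not guarantee a line count:

import requests

BASE = "http://localhost:8000/api/v1/namespaces/app-test/pods/designer-0/log"

def fetch_chunk(since_time, limit_bytes=16384):
    # timestamps=true prefixes every line with an RFC3339 timestamp, so the
    # last line of this chunk tells us where the next chunk should start.
    params = {
        "sinceTime": since_time,
        "limitBytes": limit_bytes,
        "timestamps": "true",
    }
    resp = requests.get(BASE, params=params)
    resp.raise_for_status()
    return resp.text.splitlines()

lines = fetch_chunk("2017-09-17T10:47:58Z")
print(f"got {len(lines)} lines")
if lines:
    # Use the last line's timestamp as the starting point of the next chunk.
    next_since = lines[-1].split(" ", 1)[0]
    print("next chunk starts at", next_since)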

Related

Correct REST API for autosuggest on Google?

I feel silly asking this, but it's doing my head in.
If I use 'https://maps.googleapis.com/maps/api/place/autocomplete/json' and set the input parameter to, say, 'Palazzo Cast', I get about 5 suggestions, none of which is the one I'm looking for. If I set input to 'Palazzo Castellania', I get zero results, even though there is a place called this (see below). I've set the region parameter to 'mt'.
If I use 'https://maps.googleapis.com/maps/api/place/findplacefromtext' and set the input parameter to 'Palazzo Castellania', I get 'the Ministry of Health', which is correct. However, if I put in a partial string, I get only a single candidate, which will be something different. There doesn't seem to be a way to get multiple place candidates?
I'm guessing that, from the API side, I have to do a multi-step process, but it would be good to get some input.
My thoughts:
I start with 'https://maps.googleapis.com/maps/api/place/autocomplete/json'; if I get an empty result, I try 'https://maps.googleapis.com/maps/api/place/findplacefromtext'.
If I get a single result from either, I can pass the place ID to the Places API to get more detailed data.
Make sense? It feels ugly.
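A rough Python sketch of that flow (YOUR_KEY is a placeholder, and the helper names are just illustrative, not a recommended design):

import requests

KEY = "YOUR_KEY"  # placeholder API key

def autocomplete(text):
    r = requests.get(
        "https://maps.googleapis.com/maps/api/place/autocomplete/json",
        params={"input": text, "key": KEY},
    )
    return r.json().get("predictions", [])

def find_place(text):
    r = requests.get(
        "https://maps.googleapis.com/maps/api/place/findplacefromtext/json",
        params={"input": text, "inputtype": "textquery",
                "fields": "place_id,name", "key": KEY},
    )
    return r.json().get("candidates", [])

def details(place_id):
    r = requests.get(
        "https://maps.googleapis.com/maps/api/place/details/json",
        params={"place_id": place_id, "key": KEY},
    )
    return r.json().get("result", {})

results = autocomplete("Palazzo Castellania") or find_place("Palazzo Castellania")
if len(results) == 1:
    print(details(results[0]["place_id"]))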
Edit
Watching how https://www.google.com.mt/ does it: while typing, it uses suggest (and never gives the right answer, just like the API), and then when I hit enter it uses search and gives the correct answer, leading me to the conclusion that there are actually two databases at work!
Basically, "it's by design"; there is no fix as of Feb 2023. My plan is to cache results and do a first search against that (see the sketch below); otherwise I'll probably use Bing or HERE.
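A tiny sketch of that caching idea, reusing the hypothetical autocomplete/find_place helpers from the sketch above (an in-memory dict here; a real app would persist it):

place_cache = {}  # lowercased place name -> place_id from earlier lookups

def search(text):
    needle = text.strip().lower()
    # Search the cache by substring first, since autocomplete misses some
    # full names; only fall back to the APIs on a cache miss.
    hits = [pid for name, pid in place_cache.items() if needle in name]
    if hits:
        return hits
    results = autocomplete(text) or find_place(text)
    for r in results:
        name = (r.get("description") or r.get("name", "")).lower()
        place_cache[name] = r.get("place_id")
    return [r.get("place_id") for r in results]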

What is the best way to run a background process in a Dash app?

I have a Dash application that queries an API based on a user search query, performs some calculations on the response, then displays the final results to the user. To provide a quick response, I am trying to set up a quick-result callback and a full-result long_callback.
The quick result will grab limited results from the API and display them to the user within 10-15 seconds, while the full search will run in the background, collecting all results (which can take up to 2 minutes), then update the page with the full results when they are available.
I am curious what the best way to perform this is, as I have run into forking issues with my current attempt.
My current attempt: using diskcache.Cache() as the cache for a DiskcacheLongCallbackManager, and a database txt file to store the availability of results.
I have a database txt file that stores a dictionary, with the keys being the search query and the fields being quick_results: bool, full_results: bool, file_path: str, timestamp: dt (as str).
When a search query is entered and submit is pressed, a callback loads the database file as a variable and then checks the dictionary keys for the presence of this search query.
If it finds the query in the keys of the database, it loads the saved feather file from the provided file_path and returns it to the dash app for generation of the page content.
If it does not find the query in the database keys, it requests limited data from the API, runs calculations, saves the DataFrame as a feather file on disk, then creates an entry in the database with the search query (as the key), the file path of the saved feather file, the current timestamp, and sets the quick_results value to True.
It then loads this feather file from the file_path created and returns it to the dash app for generation of the page content.
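For context, the on-disk bookkeeping described above looks roughly like this (a JSON file standing in for the database txt file; names are illustrative, not my actual code):

import json
import os
from datetime import datetime

import pandas as pd

DB_PATH = "search_db.json"  # placeholder for the database txt file

def load_db():
    if os.path.exists(DB_PATH):
        with open(DB_PATH) as f:
            return json.load(f)
    return {}

def save_entry(query, df, full_results=False):
    # Save the DataFrame as feather and record its availability in the db.
    db = load_db()
    file_path = f"{query}.feather"  # assumes the query is filename-safe
    df.reset_index(drop=True).to_feather(file_path)
    db[query] = {
        "quick_results": True,
        "full_results": full_results,
        "file_path": file_path,
        "timestamp": datetime.now().isoformat(),
    }
    with open(DB_PATH, "w") as f:
        json.dump(db, f)
    return file_path

def load_entry(query):
    # Load the cached DataFrame for a query, if the db says it exists.
    entry = load_db().get(query)
    if entry is None:
        return None
    return pd.read_feather(entry["file_path"])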
A long_callback is triggered at the same time as the above callback, with a 20-second sleep to prevent overlap with the quick search. This callback also loads the database file as a variable and checks if the query is present in the database keys.
If found, it then checks if the full results value is True and if the timestamp is more than 0 days old.
If the full results are unavailable or are more than 0 days old, the long_callback requests full results from the API, performs the same calculations, then updates the already existing search query in the database, making the full_results True and the timestamp the time of completion for the full search.
It then loads the feather file from the file_path and returns it to the dash app for generation of the page content.
If the results are available and less than 1 day old, the long_callback simply loads the feather file from the provided file_path and returns it to the dash app for generation of the page content.
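The callback structure itself is essentially this, stripped down, with placeholder component names and bodies rather than my actual API calls:

import time

import dash
import diskcache
from dash import Input, Output, State, dcc, html
from dash.long_callback import DiskcacheLongCallbackManager

cache = diskcache.Cache("./cache")
app = dash.Dash(__name__,
                long_callback_manager=DiskcacheLongCallbackManager(cache))

app.layout = html.Div([
    dcc.Input(id="query", type="text"),
    html.Button("Submit", id="submit"),
    html.Div(id="quick-results"),
    html.Div(id="full-results"),
])

@app.callback(
    Output("quick-results", "children"),
    Input("submit", "n_clicks"),
    State("query", "value"),
    prevent_initial_call=True,
)
def quick_search(n_clicks, query):
    # Limited API request, calculations, save feather, update the db,
    # then return something for the user within ~10-15 seconds.
    return f"Quick results for {query!r}"

@app.long_callback(
    output=Output("full-results", "children"),
    inputs=Input("submit", "n_clicks"),
    prevent_initial_call=True,
)
def full_search(n_clicks):
    time.sleep(20)  # let the quick search finish first
    # Full API request (can take ~2 minutes), calculations, update the db,
    # then return the complete results.
    return "Full results"

if __name__ == "__main__":
    app.run_server(debug=True)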
The problem I am currently facing is that I am getting a weird forking error in the long callback on only one of the conditions for a full search. I currently have the long_callback set up to perform a full search only if the full_results flag is False or the results are more than 0 days old. When the full_results flag is False, the callback runs as expected, updates the database and returns the full results. However, when the results are available but more than 0 days old, the callback hits a forking error and is unable to complete.
The process has forked and you cannot use this CoreFoundation functionality safely. You MUST exec(). Break on __THE_PROCESS_HAS_FORKED_AND_YOU_CANNOT_USE_THIS_COREFOUNDATION_FUNCTIONALITY___YOU_MUST_EXEC__() to debug.
I am at a loss as to why the function would run without error on one of the conditions, but then have a forking error on the other condition. The process that runs after both conditions is exactly the same.
By using print statements, I have noticed that this forking error triggers when the function tries to call the requests.get() function on the API.
If this issue is related to how I have set up the background process functionality, I would greatly appreciate suggestions or assistance on how to do this properly so that I do not face this forking error.
If there is any information I have left out that will be helpful, please let me know and I will try to provide it.
Thank you for any help you can provide.

Regular Expressions and SQL Server Error Logs - All false results

Ok, I have done my searching and I have tried many things. I think it is time to put my question here:
I have been working on taking in other users' SQL Server error logs, parsing the rows out into columns, then bulk inserting the data 1000 rows at a time. I troubleshoot SQL Server for other people, so sp_readerrorlog will only show me my local instance. Finding root cause involves 4 sets of logs (SQL Server, Application Event, System Event, and get-clusterlog outputs) and matching up timestamps. A fast load into SQL Server, along with the ability to pull the exact timeframe needed, will shorten my time spent staring at log files.
I am currently bottlenecked in testing the rows with a regular expression, which does work if I feed it data myself:
import re

def sqlrowmatch(row):
    # True if the row contains a timestamp like 2018-10-13 22:40:09.41,
    # which marks the start of a new log entry.
    pattern = re.compile(r'\d\d\d\d-\d\d-\d\d\s\d\d:\d\d:\d\d\.\d\d')
    if pattern.search(row):
        return True
    else:
        return False
Any string that matches the pattern above (e.g. 1111-11-11 11:11:11.11) returns True. The idea is that in a SQL Server error log, if a line matches, it starts a separate entry; this allows memory graphs, deadlock graphs, and dumps to all be grouped into one entry instead of being split over several lines.
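For context, the grouping I'm aiming for looks roughly like this (illustrative sketch, not my actual loader):

def group_entries(lines):
    # A new entry starts whenever a row matches the timestamp pattern;
    # otherwise the line belongs to the current entry (deadlock graphs,
    # memory graphs, dumps, and so on).
    entries = []
    for line in lines:
        if sqlrowmatch(line) or not entries:
            entries.append([line])
        else:
            entries[-1].append(line)
    return entries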
However, if I point it at one of the SQL error logs, there seem to be extra characters, which give re.match and re.search a hard time finding a match. If I pass any line into this function, sqlrowmatch(), it reports False for all rows.
ÿþ <-- these appear to be the first 2 characters of the first line. re.search just doesn't find the pattern anywhere in the different elements.
False is what is returned if I put the function inside the 'with open' statement:
with open(file, 'r') as sqllog:
    for line in sqllog:
        print(sqlrowmatch(line))
The first line should always be True if sqlrowmatch() is used:
2018-10-13 22:40:09.41 Server Microsoft SQL Server 2016 (SP2-CU2-GDR) (KB4458621) - 13.0.5201.2 (X64)
So I am lost and my current project is at a halt. Perhaps some seasoned insight from this group can get me going again.
TIA
Interestingly enough, I found my answer here: Opening huge text file, unicode issue
open should be done with encoding='utf-16'
It now matches appropriately
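For reference, a minimal sketch of that fix (the log path below is a placeholder). The ÿþ at the start of the file is a UTF-16 byte order mark, so the file has to be opened as UTF-16 for the pattern to match:

import re

pattern = re.compile(r'\d\d\d\d-\d\d-\d\d\s\d\d:\d\d:\d\d\.\d\d')
logfile = 'ERRORLOG'  # placeholder path to the SQL Server error log

# Opening with encoding='utf-16' decodes the BOM and the 2-byte characters,
# so the timestamp pattern is found again.
with open(logfile, 'r', encoding='utf-16') as sqllog:
    for line in sqllog:
        print(bool(pattern.search(line)))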

Zabbix monitor for read-only fs

I am trying to monitor filesystems with Zabbix. I found this: https://github.com/vintagegamingsystems/Zabbix-Read-Only-Filesystem-Check
and have been trying to implement it. But given this user parameter: UserParameter=checkro[*],/etc/zabbix/scripts/checkro.sh $1
I don't understand what the item key should be. According to the documentation, checkro should work, but I keep getting Status Unsupported. I tried posting this on the Zabbix forums, but it takes 3-5 days for them to approve my post.
EDIT: Files changed: /etc/zabbix/zabbix_agentd.conf. I added a line for the UserParameter and added the checkro.sh script. I restarted Zabbix afterwards (it's a container, so technically I restarted the container).
What I was expecting was for checkro[something] to be supported as an item key, but it isn't.
[*] indicates that this item key takes parameters. The script has this line: mountPoint=$1.
Thus the item key should have the mountpoint passed as a parameter like so:
checkro[/home]
Maybe too late, but I just used this script. It only works for / and /boot. If your FS is on, say, /dev/MAPPER etc., it does not work.

How to see output of TextOutW(...) after each call?

On writing to the display with:
::TextOutW( pDC->m_hDC, x, y, &Out, 1 );
It only shows on the screen after every 15 calls (15 characters).
For debugging purposes only, I would like to see the new character on the display after each call. I have tried ::flushall() and a few other things but no change.
TIA
GDI function calls are accumulated and called in batches for performance reasons.
You can call GdiFlush after the TextOut call to perform the drawing immediately. Alternatively, call GdiSetBatchLimit(1) before outputting the text to disable batching completely.
::flushall() flushes C runtime streams, so it won't affect Windows screen output at all. I've never tried it, but based on the docs, I believe GdiFlush() might be what you want. You should also be able to use GdiSetBatchLimit(1); to force each call to be drawn immediately.
