Is it possible to do simple edits to transform a black and white SVG to a grayscale one?

I am using mscgen to create an image, for documentation purposes, of a complex set of events for one of my tools.
I put that image on my website (bottom of that page) and the problem is that the lines tend to disappear when the image is resized to a scale that fits the page. (Update: the answer by Sander fixed the problem; there is a PNG screenshot of what I was seeing, which you may want to enlarge to view at 1:1 scale.)
I am thinking that if the image were marked as grayscale, instead of black and white, the scaling might work better. Is that at all possible?
Unfortunately Stack Overflow does not let me upload an SVG image... I can post part of the source if requested. You may find the full source by following the link above. Here is my msc code; you can recreate the SVG image with the following command:
mscgen -T svg -o snapinit.svg snapinit.msc
The input code (snapinit.msc):
msc {
hscale = "2";
a [label="snapinit"],
b [label="snapcommunicator"],
c [label="snapserver"],
d [label="snapbackend (permanent)"],
e [label="snapbackend (cron)"],
f [label="neighbors"],
g [label="snapsignal"];
d note d [label="images, page_list, sendmail,snapwatchdog"];
#
# snapinit initialization
#
a=>a [label="init()"];
a=>a [label="--detach (optional)"];
|||;
... [label="pause (0 seconds)"];
|||;
a=>>a [label="connection timeout"];
a=>b [label="start (fork+execv)"];
|||;
b>>a;
#
# snapcommunicator initialization
#
b=>b [label="open socket to neighbor"];
b->f [label="CONNECT type=frontend ..."];
f->b [label="ACCEPT type=backend ..."];
... [label="or"];
f->b [label="REFUSE type=backend"];
|||;
... [label="neighbors may try to connect too"];
|||;
f=>f [label="open socket to neighbor"];
f->b [label="CONNECT type=backend ..."];
b->f [label="ACCEPT type=frontend ..."];
... [label="or"];
b->f [label="REFUSE type=frontend"];
#
# snapinit registers with snapcommunicator
#
|||;
... [label="pause (10 seconds)"];
|||;
a=>a [label="open socket to snapcommunicator"];
a->b [label="REGISTER service=snapinit;version=<version>"];
b->a [label="READY"];
a->b [label="SERVICES list=...depends on snapinit.xml..."];
a=>a [label="wakeup services"];
|||;
b->a [label="HELP"];
a->b [label="COMMANDS list=HELP,QUITTING,READY,STOP"];
#
# snapinit starts snapserver which registers with snapcommunicator
#
|||;
... [label="pause (0 seconds)"];
|||;
--- [label="...start snapserver..."];
a=>>a [label="connection timeout"];
a=>c [label="start (fork+execv)"];
c>>a;
c=>c [label="open socket to snapcommunicator"];
c->b [label="REGISTER service=snapserver;version=<version>"];
b->c [label="READY"];
#
# snapinit starts various backends (images, sendmail, ...)
#
|||;
... [label="pause (<wait> seconds, at least 1 second)"];
|||;
--- [label="...(start repeat for each backend)..."];
a=>>a [label="connection timeout"];
a=>d [label="start (fork+execv)"];
d>>a;
d=>d [label="open socket to snapcommunicator"];
d->b [label="REGISTER service=<service name>;version=<version>"];
b->d [label="READY"];
b->d [label="STATUS service=snapwatchdog"];
|||;
... [label="pause (<wait> seconds, at least 1 second)"];
|||;
--- [label="...(end repeat)..."];
#
# snapinit starts snapback (CRON task)
#
|||;
... [label="...cron task, run once per timer tick event..."];
|||;
a=>>a [label="CRON timer tick"];
a=>a [label="if CRON tasks still running, return immediately"];
a=>e [label="start (fork+execv)"];
e>>a;
e=>e [label="open socket to snapcommunicator"];
e->b [label="REGISTER service=snapbackend;version=<version>"];
b->e [label="READY"];
|||;
e=>>e [label="run CRON task 1"];
e=>>e [label="run CRON task 2"];
...;
e=>>e [label="run CRON task n"];
|||;
e->b [label="UNREGISTER service=snapbackend"];
|||;
... [label="...(end of cron task)..."];
#
# STOP process
#
|||;
--- [label="snapinit STOP process with: 'snapinit stop' or 'snapsignal snapinit/STOP'"];
|||;
g->b [label="'snapsignal snapinit/STOP' command sends STOP to snapcommunicator"];
b->a [label="STOP"];
... [label="...or..."];
a->a [label="'snapinit stop' command sends STOP to snapinit"];
...;
a->b [label="UNREGISTER service=snapinit"];
a->b [label="STOP"];
b->c [label="snapserver/STOP"];
b->d [label="<service name>/STOP"];
b->e [label="snapbackend/STOP"];
c->b [label="UNREGISTER service=snapserver"];
c->c [label="exit(0)"];
d->b [label="UNREGISTER service=<service name>"];
d->d [label="exit(0)"];
e->b [label="UNREGISTER service=snapbackend (if still running at the time)"];
e->e [label="exit(0)"];
... [label="once all services are unregistered"];
b->f [label="DISCONNECT"];
}

Remove the shape-rendering="crispEdges" attribute from the svg tag (line 6 in your svg).
Browsers usually switch off anti-aliasing when shape-rendering="crispEdges" is set, and anti-aliasing is exactly what is needed in this situation.
Use another value for shape-rendering (e.g. "auto" or "geometricPrecision"), or simply remove the attribute, and you're set.
(Initially I thought that PNGs rendered by mscgen could relieve your sorrows as well, but while the result looks OK in Firefox and Chrome, Safari's rendition is less appealing.)
Reference: The 'shape-rendering' property at w3.org
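If you would rather script the edit than open the file in an editor, here is a minimal sketch (my addition; it assumes the attribute sits on the root <svg> element, as in mscgen's output, and note that ElementTree rewrites the XML declaration and drops any comments):

import xml.etree.ElementTree as ET

ET.register_namespace('', 'http://www.w3.org/2000/svg')   # keep the default namespace on output
tree = ET.parse('snapinit.svg')
root = tree.getroot()
root.attrib.pop('shape-rendering', None)                  # drop the attribute entirely...
# root.set('shape-rendering', 'geometricPrecision')       # ...or set an anti-aliasing-friendly value
tree.write('snapinit.svg', encoding='UTF-8', xml_declaration=True)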

Related

How can I get Monitor Connection information from Wayland?

When running under X, I can get information about the monitor connection as follows:
$ xrandr --verbose
DP-0.8 connected 2560x1440+2560+0 (0x1bd) normal (normal left inverted right x axis y axis) 597mm x 336mm
...
SignalFormat: DisplayPort
supported: DisplayPort
ConnectorType: DisplayPort
ConnectorNumber: 3
_ConnectorLocation: 3
non-desktop: 0
supported: 0, 1
2560x1440 (0x1bd) 241.500MHz +HSync -VSync *current +preferred
...
This information is available to applications via XRRListOutputProperties from <X11/extensions/Xrandr.h>.
Is there an equivalent when running under Wayland? Specifically, I need the information provided by ConnectorType and ConnectorNumber.
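To my knowledge there is no core Wayland protocol that exposes these properties (compositor-specific extensions such as wlr-output-management aside). On Linux, one hedged workaround is to read the kernel's DRM connector state from sysfs, which works regardless of the display server; a minimal sketch:

import os

# Each connector appears as e.g. /sys/class/drm/card0-DP-1 with a "status" file
DRM = '/sys/class/drm'
for entry in sorted(os.listdir(DRM)):
    status_path = os.path.join(DRM, entry, 'status')
    if not os.path.isfile(status_path):
        continue                              # skip card0 itself, renderD128, ...
    with open(status_path) as f:
        status = f.read().strip()
    connector = entry.split('-', 1)[1]        # e.g. "DP-1" or "HDMI-A-1"
    ctype, index = connector.rsplit('-', 1)
    print(f'{entry}: type={ctype} index={index} status={status}')

Note this reports the kernel's own connector numbering, which is not guaranteed to match the ConnectorNumber that XRandR reports.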

Console Screen Buffer Info shows incorrect X position

I recently found a great short piece of code (Why the irrelevant code made a difference?) for obtaining console screen buffer info (which I include below) that replaces the huge code accompanying the standard 'CONSOLE_SCREEN_BUFFER_INFO()' method (which I won't include here!)
import ctypes
import struct
print("xxx",end="") # I added this to show what the problem is
hstd = ctypes.windll.kernel32.GetStdHandle(-11) # STD_OUTPUT_HANDLE = -11
csbi = ctypes.create_string_buffer(22)
res = ctypes.windll.kernel32.GetConsoleScreenBufferInfo(hstd, csbi)
width, height, curx, cury, wattr, left, top, right, bottom, maxx, maxy = struct.unpack("hhhhHhhhhhh", csbi.raw)
# The following two lines are also added
print() # To bring the cursor to the next line before displaying info
print(width, height, curx, cury, wattr, left, top, right, bottom, maxx, maxy) # Display what we got
Output:
80 250 0 7 7 0 0 79 24 80 43
This output is for the Windows 10 console, with the screen cleared before running the code. However, 'curx' = 0 although it should be 3 (after printing "xxx"). The same phenomenon also happens with the 'CONSOLE_SCREEN_BUFFER_INFO()' method. Any idea what the problem is?
Also, any suggestion for a method of obtaining the current cursor position, besides the 'curses' library, will be welcome!
You need to flush the print buffer if you don't output a linefeed:
print("xxx",end="",flush=True)
Then I get the correct curx=3 with your code:
xxx
130 9999 3 0 14 0 0 129 75 130 76
BTW, the original answer in the posted question is the "great" code. The "bitness" of HANDLE can break your code, and not defining .argtypes as a "shortcut" is usually the cause of most ctypes problems.
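For what it's worth, here is a minimal sketch (my addition, using only the kernel32 calls already shown above) of declaring .argtypes and .restype explicitly, so the HANDLE returned by GetStdHandle is not truncated to ctypes' default 32-bit int return type on 64-bit Python:

import ctypes
from ctypes import wintypes

kernel32 = ctypes.windll.kernel32
kernel32.GetStdHandle.argtypes = [wintypes.DWORD]
kernel32.GetStdHandle.restype = wintypes.HANDLE        # full-width handle, never truncated
kernel32.GetConsoleScreenBufferInfo.argtypes = [wintypes.HANDLE, ctypes.c_char_p]
kernel32.GetConsoleScreenBufferInfo.restype = wintypes.BOOL

hstd = kernel32.GetStdHandle(-11)                      # STD_OUTPUT_HANDLE = -11
csbi = ctypes.create_string_buffer(22)
kernel32.GetConsoleScreenBufferInfo(hstd, csbi)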

Can OpenCV's VideoWriter write in a separate process?

I'm trying to save a video to disk in a separate process. The program creates a buffer of images to save in the original process. When it's done recording, it passes the file name and image buffer to a second process that creates its own VideoWriter and saves the file. When the second process calls write, however, nothing happens: it hangs and doesn't output any errors.
I checked whether the VideoWriter is already open, and it is. I tried moving the code to the original process to see if it worked there, and it does. I don't know if there is some setting I need to initialize in the new process or if it has to do with the way VideoWriter works.
Here's my code:
def stop_recording(self):
    """Stops recording in a separate process"""
    if self._file_dump_process is None:
        self._parent_conn, child_conn = multiprocessing.Pipe()
        self._file_dump_process = multiprocessing.Process(
            target=self.file_dump_loop, args=(child_conn, self.__log))
        self._file_dump_process.daemon = True
        self._file_dump_process.start()
    if self._recording:
        self.__log.info("Stopping recording. Please wait...")
        # Dump the file name and image buffer to the writer process
        # Comment out when running on the main process
        self._parent_conn.send([self._record_filename, self._img_buffer])
        """ Comment in when running on the main process
        fourcc = cv2.VideoWriter_fourcc(*"MJPG")
        effective_fps = 16.0
        frame_shape = (640, 480)
        record_file = cv2.VideoWriter(self._record_filename, fourcc,
                                      effective_fps, frame_shape,
                                      isColor=1)
        for img in self._img_buffer:
            self.__log.info("...still here...")
            record_file.write(img)
        # Close the file and set it to None
        record_file.release()
        self.__log.info("done.")
        """
        # Delete the entire image buffer no matter what
        del self._img_buffer[:]
        self._recording = False

@staticmethod
def file_dump_loop(child_conn, parent_log):
    fourcc = cv2.VideoWriter_fourcc(*"MJPG")
    effective_fps = 16.0
    frame_shape = (640, 480)
    while True:
        msg = child_conn.recv()
        record_filename = msg[0]
        img_buffer = msg[1]
        record_file = cv2.VideoWriter(record_filename, fourcc,
                                      effective_fps, frame_shape,
                                      isColor=1)
        for img in img_buffer:
            parent_log.info("...still here...")
            record_file.write(img)
        # Close the file and set it to None
        record_file.release()
        del img_buffer[:]
        parent_log.info("done.")
Here's the log output when I run it on one process:
2019-03-29 16:19:02,469 - image_processor.stop_recording - INFO: Stopping recording. Please wait...
2019-03-29 16:19:02,473 - image_processor.stop_recording - INFO: ...still here...
2019-03-29 16:19:02,515 - image_processor.stop_recording - INFO: ...still here...
2019-03-29 16:19:02,541 - image_processor.stop_recording - INFO: ...still here...
2019-03-29 16:19:02,567 - image_processor.stop_recording - INFO: ...still here...
2019-03-29 16:19:02,592 - image_processor.stop_recording - INFO: ...still here...
2019-03-29 16:19:02,617 - image_processor.stop_recording - INFO: ...still here...
2019-03-29 16:19:02,642 - image_processor.stop_recording - INFO: ...still here...
2019-03-29 16:19:02,670 - image_processor.stop_recording - INFO: done.
Here's the log output when I run it on a second process:
2019-03-29 16:17:27,299 - image_processor.stop_recording - INFO: Stopping recording. Please wait...
2019-03-29 16:17:27,534 - image_processor.file_dump_loop - INFO: ...still here...
I tried this, and was successful with the following code:
import cv2

cap, imgs = cv2.VideoCapture('exampleVideo.MP4'), []

# This function writes video
def write_video(list_of_images):
    vid_writer = cv2.VideoWriter('/home/stephen/Desktop/re_encode.avi',
                                 cv2.VideoWriter_fourcc('M','J','P','G'),
                                 120, (640,480))
    for image in list_of_images:
        vid_writer.write(image)

# Loop to read video and save images to a list
for frame in range(123):
    _, img = cap.read()
    imgs.append(img)
write_video(imgs)
cap.release()
Everything worked as expected, and when I checked how long it took to run, I found that the above code took 0.13 seconds to read the video and 0.43 seconds to write it. If I read and write the video in the same loop (below), the total processing time is 0.56 seconds (which is 0.13 + 0.43).
# Loop to save image to video
for frame in range(123):
    _, img = cap.read()
    vid_writer.write(img)
There is a big disadvantage to writing the images to a buffer (in memory) first and then writing them to a video file (on the hard drive): the buffer lives in RAM, which will fill up very quickly, and you will likely get a memory error.
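If holding the whole clip in memory is the concern, a hedged alternative is to stream frames to the writer process one at a time through a bounded queue, so RAM use stays flat. This is my sketch, not the poster's code; writer_loop, the queue size, and the 16 fps / 640x480 settings are illustrative:

import multiprocessing
import cv2

def writer_loop(queue, filename):
    """Consume frames from the queue and write them to disk."""
    fourcc = cv2.VideoWriter_fourcc(*"MJPG")
    writer = cv2.VideoWriter(filename, fourcc, 16.0, (640, 480))
    while True:
        frame = queue.get()
        if frame is None:                        # sentinel: recording finished
            break
        writer.write(frame)                      # assumes 640x480 frames, as above
    writer.release()

if __name__ == "__main__":
    queue = multiprocessing.Queue(maxsize=64)    # bounded, so memory stays flat
    proc = multiprocessing.Process(target=writer_loop, args=(queue, "out.avi"))
    proc.start()
    cap = cv2.VideoCapture("exampleVideo.MP4")
    while True:
        ok, img = cap.read()
        if not ok:
            break
        queue.put(img)                           # blocks if the writer falls behind
    queue.put(None)                              # tell the writer to finish
    proc.join()
    cap.release()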

How to Generate Grok Patterns automatically using LogMine

I am trying to generate Grok patterns automatically using LogMine.
Log sample:
Error IGXL error [Slot 2, Chan 16, Site 0] HSDMPI:0217 : TSC3 Fifo Edge EG0-7 Underflow. Please check the timing programming. Edge events should be fired in the sequence and the time between two edges should be more than 2 MOSC ticks.
Error IGXL error [Slot 2, Chan 18, Site 0] HSDMPI:0217 : TSC3 Fifo Edge EG0-7 Underflow. Please check the timing programming. Edge events should be fired in the sequence and the time between two edges should be more than 2 MOSC ticks.
For the above logs, I am getting the following pattern:
re.compile('^(?P<Event>.*?)\\s+(?P<Tester>.*?)\\s+(?P<State>.*?)\\s+(?P<Slot>.*?)\\s+(?P<Instrument>.*?)\\s+(?P<Content1>.*?):\\s+(?P<Content>.*?)$')
But I expect a Grok pattern (for Logstash) that looks like this:
%{LOGLEVEL:level} *%{DATA:Instrument} %{LOGLEVEL:State} \[%{DATA:slot} %{DATA:slot} %{DATA:channel} %{DATA:channel} %{DATA:Site}] %{DATA:Tester} : %{DATA:Content}
Code: LogMine is imported from the following link: https://github.com/logpai/logparser/tree/master/logparser/LogMine
import sys
import os
sys.path.append('../')
import LogMine
input_dir  = r'E:\LogMine\LogMine'                   # The input directory of the log file
output_dir = r'E:\LogMine\LogMine/output/'           # The output directory of parsing results
log_file   = r'E:\LogMine\LogMine/log_teradyne.txt'  # The input log file name
log_format = '<Event> <Tester> <State> <Slot> <Instrument><content> <contents> <context> <desc> <junk>'  # Log format
levels     = 1      # The levels of hierarchy of patterns
max_dist   = 0.001  # The maximum distance between any log message in a cluster and the cluster representative
k          = 1      # The message distance weight (default: 1)
regex      = []     # Regular expression list for optional preprocessing (default: [])
print(os.getcwd())
parser = LogMine.LogParser(input_dir, output_dir, log_format, rex=regex, levels=levels, max_dist=max_dist, k=k)
parser.parse(log_file)
This code returns only the parsed CSV file; I am looking to generate Grok patterns and use them later in a Logstash application to parse the logs.
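The LogMine repository does not ship a Grok exporter as far as I know, but since its output is a regex built from (?P<name>.*?) groups, the regex can be rewritten into a Grok pattern mechanically. A hedged sketch (this helper is my illustration, not part of LogMine; specialized types such as LOGLEVEL still have to be assigned by hand afterwards):

import re

# Regex produced by LogMine for the log sample above
logmine_regex = r'^(?P<Event>.*?)\s+(?P<Tester>.*?)\s+(?P<State>.*?)\s+(?P<Slot>.*?)\s+(?P<Instrument>.*?)\s+(?P<Content1>.*?):\s+(?P<Content>.*?)$'

# Replace each named wildcard group with a %{DATA:name} Grok field
grok = re.sub(r'\(\?P<(\w+)>\.\*\?\)', r'%{DATA:\1}', logmine_regex)
grok = grok.replace(r'\s+', ' ').strip('^$')
print(grok)
# %{DATA:Event} %{DATA:Tester} %{DATA:State} %{DATA:Slot} %{DATA:Instrument} %{DATA:Content1}: %{DATA:Content}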

When changing the file name, recording start is delayed by 3 seconds

I am using two ASFWriter filters in a graph: one writes a WMV file, the other is used for live streaming.
While streaming is running, changing the file name delays the recording start by about 3 seconds, so the head of the new WMV file is missing. This is a problem.
CAMERA ------ InfTee Filter ---   --- AsfWriter Filter → WMV File
                                X
Microphone --- InfTee Filter2 ---   --- AsfWriter Filter2 → Live Streaming
void RecStart()
{
    ...
    ConnectFilters(pInfTee, "Infinite Pin Tee Filter(1)", L"Output1", pASFWriter, "ASFWriter", L"Video Input 01");
    ConnectFilters(pInfTee, "Infinite Pin Tee Filter(2)", L"Output2", pASFWriter2, "ASFWriter", L"Video Input 01");
    ConnectFilters(pSrcAudio, "Audio Source", L"Capture", pInfTee2, "Infinite Pin Tee Filter", L"Input");
    ConnectFilters(pInfTee2, "Infinite Pin Tee Filter(1)A", L"Output1", pASFWriter, "ASFWriter", L"Audio Input 01");
    ConnectFilters(pInfTee2, "Infinite Pin Tee Filter(2)A", L"Output2", pASFWriter2, "ASFWriter", L"Audio Input 01");

    pASFWriter2->QueryInterface(IID_IConfigAsfWriter, (void**)&pConfig);
    pConfig->QueryInterface(IID_IServiceProvider, (void**)&pProvider);
    pProvider->QueryService(IID_IWMWriterAdvanced2, IID_IWMWriterAdvanced2, (void**)&mpWriter2);
    mpWriter2->SetLiveSource(TRUE);
    mpWriter2->RemoveSink(0);

    WMCreateWriterNetworkSink(&mpNetSink);
    DWORD dwPort = (DWORD)streamingPortNo;
    mpNetSink->Open(&dwPort);
    mpNetSink->GetHostURL(url, &url_len);
    hr = mpWriter2->AddSink(mpNetSink);

    pGraph->QueryInterface(IID_IMediaEventEx, (void**)&pMediaIvent);
    pMediaIvent->SetNotifyWindow((OAHWND)this->m_hWnd, WM_GRAPHNOTIFY, 0);
    pGraph->QueryInterface(IID_IMediaControl, (void**)&pMediaControl);
    pMediaControl->Run();
}

void OnTimer()
{
    pMediaControl->Stop();
    CComQIPtr<IFileSinkFilter, &IID_IFileSinkFilter> pIFS = pASFWriter;
    pIFS->SetFileName(NewFilename, NULL);
    pMediaControl->Run();
}
→ I think that, because the graph has to wait for the streaming to start, about 3 seconds are missing from the head of the new WMV file.
Are there any countermeasures?
When you restart the graph, you inevitably miss a fragment of data due to initialization overhead, and it is impossible to switch files without stopping the graph. The solution is to use multiple graphs and keep capturing while the part that writes the file is being reinitialized.
See DirectShow Bridges for a typical solution addressing this problem.
