I have paired my Android phone with Ubuntu 18.04.1 and am playing music on the phone (the A2DP source). I can hear the music on Ubuntu (the A2DP sink), send commands to the phone (Play/Pause/Next, etc.) over D-Bus using the BlueZ media API, and also get track info (Title, Duration, track Position, etc.).
The problem: if I am playing a song of, say, 5 minutes, currently at 2:10, the D-Bus query correctly reports a Position of around 130000 ms. But if I manually change the position on the phone to, say, 2:30, the query still reports around 130-140k ms, which means BlueZ is extrapolating the position on its own from the last sync and is unaware of any change made on the phone. However, if I pause and play again, the position is synced and reads correctly.
The dbus query:
dbus-send --print-reply --system --dest=org.bluez /org/bluez/hci0/dev_44_C3_46_7B_2D_C7/player0 org.freedesktop.DBus.Properties.Get string:"org.bluez.MediaPlayer1" string:"Position"
I cross-checked with the Windows 10 A2DP sink player; there, a position change made on the phone is reflected in the player window immediately.
I tried monitoring dbus signals with:
dbus-monitor --system "type='signal', sender='org.bluez'"
And this was the output:
dbus-monitor: unable to enable new-style monitoring: org.freedesktop.DBus.Error.AccessDenied: "Rejected send message, 1 matched rules; type="method_call", sender=":1.246" (uid=1000 pid=8991 comm="dbus-monitor --system type='signal', sender='org.b" label="unconfined") interface="org.freedesktop.DBus.Monitoring" member="BecomeMonitor" error name="(unset)" requested_reply="0" destination="org.freedesktop.DBus" (bus)". Falling back to eavesdropping.
signal time=1555847560.121045 sender=org.freedesktop.DBus -> destination=:1.246 serial=2 path=/org/freedesktop/DBus; interface=org.freedesktop.DBus; member=NameAcquired
   string ":1.246"
signal time=1555847570.624003 sender=:1.161 -> destination=(null destination) serial=32467 path=/org/bluez/hci0/dev_44_C3_46_7B_2D_C7/fd1; interface=org.freedesktop.DBus.Properties; member=PropertiesChanged
   string "org.bluez.MediaTransport1"
   array [
      dict entry(
         string "State"
         variant string "pending"
      )
   ]
   array [
   ]
signal time=1555847570.625959 sender=:1.161 -> destination=(null destination) serial=32469 path=/org/bluez/hci0/dev_44_C3_46_7B_2D_C7/fd1; interface=org.freedesktop.DBus.Properties; member=PropertiesChanged
   string "org.bluez.MediaTransport1"
   array [
      dict entry(
         string "State"
         variant string "active"
      )
   ]
   array [
   ]
signal time=1555847570.662250 sender=:1.161 -> destination=(null destination) serial=32470 path=/org/bluez/hci0/dev_44_C3_46_7B_2D_C7/player0; interface=org.freedesktop.DBus.Properties; member=PropertiesChanged
   string "org.bluez.MediaPlayer1"
   array [
      dict entry(
         string "Status"
         variant string "playing"
      )
   ]
   array [
   ]
signal time=1555847570.723943 sender=:1.161 -> destination=(null destination) serial=32471 path=/org/bluez/hci0/dev_44_C3_46_7B_2D_C7/player0; interface=org.freedesktop.DBus.Properties; member=PropertiesChanged
   string "org.bluez.MediaPlayer1"
   array [
      dict entry(
         string "Position"
         variant uint32 148405
      )
   ]
   array [
   ]
This confirms that the PropertiesChanged signal fires only when actions such as Play/Pause/Next/Previous are executed, not when the position is changed on the phone.
Can I make BlueZ fire the PropertiesChanged signal on demand, perhaps through the BlueZ API itself?
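BlueZ only updates Position when the remote player pushes an AVRCP notification, so short of patching BlueZ there is no supported way to force the signal. One workaround is to do the same bookkeeping BlueZ does, on top of whatever events do arrive. Below is a minimal, hypothetical sketch (plain Python, no D-Bus calls; the class and method names are my own) of the cache-and-extrapolate logic you would hook up to a PropertiesChanged handler:

```python
import time

class PositionTracker:
    """Extrapolate track position between BlueZ PropertiesChanged events.

    BlueZ pushes Position only on play/pause/track changes, so between
    events we advance the last known position by wall-clock time while
    Status is "playing".
    """
    def __init__(self):
        self.position_ms = 0
        self.status = "paused"
        self.synced_at = time.monotonic()

    def on_properties_changed(self, changed, now=None):
        now = time.monotonic() if now is None else now
        # Fold in our own extrapolation before applying the update
        self.position_ms = self.current(now)
        self.synced_at = now
        if "Position" in changed:
            self.position_ms = changed["Position"]
        if "Status" in changed:
            self.status = changed["Status"]

    def current(self, now=None):
        now = time.monotonic() if now is None else now
        if self.status != "playing":
            return self.position_ms
        return self.position_ms + int((now - self.synced_at) * 1000)

t = PositionTracker()
t.on_properties_changed({"Status": "playing", "Position": 130000}, now=0.0)
print(t.current(now=10.0))  # 140000 ms after 10 s of playback
```

Note this has exactly the limitation described above: a seek done on the phone stays invisible until the next real event arrives and re-syncs the cache.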
complex_result = ['In the 2000s , operating systems such as ncp1 Android flourished for mobile ncp2 known', 'as apps became commonplace']
res_dict = {'rep_sentence': 'In the 2000s, operating systems such as ncp1 Android flourished for mobile ncp2 known as apps became commonplace.', 'replacements': [{'replacedPhrase': 'Apple iOS and Google', 'replacement': 'ncp1'}, {'replacedPhrase': 'devices, and programs', 'replacement': 'ncp2'}]}
for each_rep in res_dict['replacements']:
    res = [masked_np.replace(each_rep['replacement'], each_rep['replacedPhrase']) if
           each_rep['replacement'] in masked_np else masked_np for masked_np in
           complex_result]
    print("res>>", res)
Actual output = ['In the 2000s , operating systems such as ncp1 Android flourished for mobile devices, and programs known', 'as apps became commonplace']
Expected output = ['In the 2000s , operating systems such as Apple iOS and Google Android flourished for mobile devices, and programs known.']
As you loop over the replacements, you overwrite your value of res on every iteration: each pass builds the list from the original complex_result again, throwing away the previous replacement.
Also, you don't need to check each_rep['replacement'] in masked_np before calling .replace(); if the substring is not there, .replace() simply returns the string unchanged.
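A quick demonstration of that no-op behaviour:

```python
s = "as apps became commonplace"
# .replace() with a substring that does not occur returns the string unchanged
assert s.replace("ncp1", "Apple iOS and Google") == s
print("unchanged")
```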
A different way to do this is to combine map() and functools.reduce() (in Python 2, reduce() was a builtin):
from functools import reduce
complex_result = ['In the 2000s , operating systems such as ncp1 Android flourished for mobile ncp2 known', 'as apps became commonplace']
res_dict = {'rep_sentence': 'In the 2000s, operating systems such as ncp1 Android flourished for mobile ncp2 known as apps became commonplace.', 'replacements': [{'replacedPhrase': 'Apple iOS and Google', 'replacement': 'ncp1'}, {'replacedPhrase': 'devices, and programs', 'replacement': 'ncp2'}]}
res = map(lambda v: reduce(lambda s, r: s.replace(r['replacement'], r['replacedPhrase']), res_dict['replacements'], v), complex_result)
print("res>>", list(res))
To fix your current code, you need to make sure you are running the .replace() on the updated string/list from each loop. Something like this:
complex_result = ['In the 2000s , operating systems such as ncp1 Android flourished for mobile ncp2 known', 'as apps became commonplace']
res_dict = {'rep_sentence': 'In the 2000s, operating systems such as ncp1 Android flourished for mobile ncp2 known as apps became commonplace.', 'replacements': [{'replacedPhrase': 'Apple iOS and Google', 'replacement': 'ncp1'}, {'replacedPhrase': 'devices, and programs', 'replacement': 'ncp2'}]}
res = complex_result.copy()
for each_rep in res_dict['replacements']:
    res = [masked_np.replace(each_rep['replacement'], each_rep['replacedPhrase']) for masked_np in res]
print("res>>", res)
Basically:
complex_result = ''.join(complex_result[0])  # complex_result[0] is already a string, so this just takes the first element
for i in res_dict['replacements']:
    complex_result = complex_result.replace(i['replacement'], i['replacedPhrase'])
complex_result = complex_result.split(maxsplit=0)  # wrap the string back into a single-element list
Explanation:
First we convert the list to a string using the join method.
join is a string method that takes an iterable, concatenates its elements, and inserts the separator string between each pair. Example:
x = ['Dog', 'Cat']
print('and'.join(x))  # Result: DogandCat
Oops, we forgot the spaces; let's add them:
print(' and '.join(x))  # Result: Dog and Cat
So ''.join() with an empty separator simply concatenates the elements into one string.
Then we iterate over res_dict['replacements'], which yields these dicts:
{'replacedPhrase': 'Apple iOS and Google', 'replacement': 'ncp1'}
{'replacedPhrase': 'devices, and programs', 'replacement': 'ncp2'}
For each one we replace the 'replacement' text with the 'replacedPhrase' text, so first 'ncp1' is replaced with 'Apple iOS and Google', then 'ncp2' with 'devices, and programs'.
Finally we use the split method with maxsplit=0 to convert the string back to a list.
The maxsplit parameter limits how many splits are performed (splitting happens on whitespace). For example, 'a b c d'.split(maxsplit=1) splits off only 'a' and keeps the rest as one string, giving ['a', 'b c d']. By default maxsplit is -1, meaning split at every space; with maxsplit=0 no splits happen and the whole string becomes a single-element list. Examples:
'cat is faster than dog'.split()  # result: ['cat', 'is', 'faster', 'than', 'dog']
'cat is faster than dog'.split(maxsplit=2)  # result: ['cat', 'is', 'faster than dog']
I'm using nRF Connect for Android to test a BLE peripheral. The peripheral is a BSX Insight residual muscle oxygen monitor whose software application is no longer functional or supported by the manufacturer. Thus, my only option to use my device (BSX) is to write my own control software. I've written a Python 3.7 script that I run within a tkinter routine on my 64-bit Win 10 laptop. Also, I'm using the Pygatt library and a BLED112 BT dongle.
I can connect to the peripheral and read and write characteristic values just fine, but I'm sure that my "conversion" of the process used in nRF Connect into my script is incomplete and inefficient. So the first thing I'd like to confirm is that I'm using the correct Pygatt functions. Once that's settled, I can compare the respective outputs for the two data streams (characteristic values) that I want to capture and store.
The basic process in nRF Connect:
1. scan
2. select/connect the BSX Insight
3. expose the service and characteristics of interest
4. enable CCCDs
5. write the "start data" values (04-02)
These are the process command results from the nRF Connect log file. Starting with number four:
4.
D 09:04:54.491 gatt.setCharacteristicNotification(00002a37-0000-1000-8000-00805f9b34fb, true) 11
D 09:04:54.496 gatt.setCharacteristicNotification(2e4ee00b-d9f0-5490-ff4b-d17374c433ef, true) 20x
D 09:04:54.499 gatt.setCharacteristicNotification(2e4ee00d-d9f0-5490-ff4b-d17374c433ef, true) 25x
D 09:04:54.516 gatt.setCharacteristicNotification(2e4ee00e-d9f0-5490-ff4b-d17374c433ef, true) 32x
D 09:04:54.519 gatt.setCharacteristicNotification(00002a63-0000-1000-8000-00805f9b34fb, true) 36
D 09:04:54.523 gatt.setCharacteristicNotification(00002a53-0000-1000-8000-00805f9b34fb, true) 40
The above resulted from the nRF Connect command "Enable CCCDs": every characteristic that could be enabled was enabled, which is fine. The three lines marked with an 'x' are the ones I need enabled; the others are extra. Note that I've annotated the respective handle for each UUID at the end of the line.
V 09:05:39.211 Writing command to characteristic 2e4ee00a-d9f0-5490-ff4b-d17374c433ef
D 09:05:39.211 gatt.writeCharacteristic(2e4ee00a-d9f0-5490-ff4b-d17374c433ef, value=0x0402)
I 09:05:39.214 Data written to 2e4ee00a-d9f0-5490-ff4b-d17374c433ef, value: (0x) 04-02
A 09:05:39.214 "(0x) 04-02" sent
Number five is where I write 0402 to the UUID above. This action sends the data/value streams from:
2e4ee00d-d9f0-5490-ff4b-d17374c433ef, with a descriptor handle 26
2e4ee00e-d9f0-5490-ff4b-d17374c433ef, with a descriptor handle 33
Once I've done the basic steps above in nRF Connect, the two characteristic value streams become active, and I can immediately see the converted values in my Garmin Edge 810 head unit.
So attempting to duplicate the same process within my tkinter snippet:
import pygatt  # BLED112/BGAPI backend
from time import sleep

# this function fires from the 'On' button click event
def powerON():
    powerON_buttonevent = 1
    print(f"\tpowerON_buttonevent OK {powerON_buttonevent}")

    # Connect to the BSX Insight
    try:
        adapter = pygatt.BGAPIBackend()  # serial_port='COM3'
        adapter.start()
        device = adapter.connect('0C:EF:AF:81:0B:76', address_type=pygatt.BLEAddressType.public)
        print(f"\tConnected: {device}")
    except Exception:
        print("BSX Insight connection failure")
    finally:
        # adapter.stop()
        pass

    # Enable only these CCCDs
    try:
        device.char_write_handle(21, bytearray([0x01, 0x00]), wait_for_response=True)
        device.char_write_handle(26, bytearray([0x01, 0x00]), wait_for_response=True)
        device.char_write_handle(33, bytearray([0x01, 0x00]), wait_for_response=True)
        print(f"\te00b DESC: {device.char_read_long_handle(21)}")  # notify e00b
        print(f"\te00d DESC: {device.char_read_long_handle(26)}")  # notify e00d SmO2
        print(f"\te00e DESC: {device.char_read_long_handle(33)}")  # notify e00e tHb
        # Here's where I tested functions from Pygatt...
        # print(f"\t{device.get_handle('UUID_here')}")  # function works
        # print(f"\tvalue_handle/characteristic_config_handle: {device._notification_handles('UUID_here')}")  # function works
        # print(f"{device.char_read('UUID_here')}")
        # print(f"{device.char_read_long_handle(handle_here)}")  # function works
    except Exception:
        print("CCCD write value failure")
    finally:
        # adapter.stop()
        pass

    # Enable the data streams
    try:
        device.char_write('2e4ee00a-d9f0-5490-ff4b-d17374c433ef', bytearray([0x04, 0x02]), wait_for_response=True)  # function works
        print(f"\te00a Power ON: {device.char_read('2e4ee00e-d9f0-5490-ff4b-d17374c433ef')}")
    except Exception:
        print("e00a Power ON write failure")
    finally:
        # adapter.stop()
        pass

    # Subscribe to SmO2 and tHb UUIDs
    try:
        def data_handler(handle, value):
            """
            Indications and notifications arrive asynchronously; this callback
            handles them one at a time as they come in.
            :param handle:
            :param value:
            :return:
            """
            if handle == 25:
                print(f"\tSmO2: {value} Handle: {handle}")
            elif handle == 32:
                print(f"\ttHb: {value} Handle: {handle}")
            else:
                print(f"\tvalue: {value}, handle: {handle}")

        device.subscribe("2e4ee00d-d9f0-5490-ff4b-d17374c433ef", callback=data_handler, indication=False, wait_for_response=True)
        device.subscribe("2e4ee00e-d9f0-5490-ff4b-d17374c433ef", callback=data_handler, indication=False, wait_for_response=True)
        print(f"\tSuccess 2e4ee00d: {device.char_read('2e4ee00d-d9f0-5490-ff4b-d17374c433ef')}")
        print(f"\tSuccess 2e4ee00e: {device.char_read('2e4ee00e-d9f0-5490-ff4b-d17374c433ef')}")
        # this statement causes a run-on continuity when enabled
        # while True:
        #     sleep(1)
    except Exception:
        print("e00d/e00e subscribe failure")
    finally:
        adapter.stop()
        # pass
Problem: in the output window of my Atom editor, the two data streams start as expected. For example:
I 09:05:39.983 Notification received from 2e4ee00d-d9f0-5490-ff4b-d17374c433ef, value: (0x) 00- 00-00-00-C0-FF-00-00-C0-FF-84-65-B4-3B-9E-AB-83-3C-FF-03
and...
I 09:05:39.984 Notification received from 2e4ee00e-d9f0-5490-ff4b-d17374c433ef, value: (0x) 1C-00-00-FF-03-FF-0F-63-00-00-00-00-00-00-16-32-00-00-00-00
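As an aside, the hex-dump strings in those log lines are easy to turn back into raw bytes for offline analysis; a small helper (my own, not part of Pygatt or nRF Connect) might look like:

```python
def parse_hex_dump(s: str) -> bytes:
    """Convert an nRF Connect style hex string like '1C-00-FF' into bytes."""
    return bytes(int(tok, 16) for tok in s.split('-'))

sample = "1C-00-00-FF-03-FF-0F-63-00-00-00-00-00-00-16-32-00-00-00-00"
data = parse_hex_dump(sample)
print(len(data), hex(data[0]))  # 20 0x1c
```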
I'll see about seven to ten lines of data before the "stream" stops. Then there's a gap of about 20 seconds, followed by a big dump of values. This is different from the output in nRF Connect, which is immediate and continuous.
I have the logs from nRF Connect and Python...but I'm not sure which log entry points to the cause of the stop. Might this issue be related to the Peripheral Preferred Connection Parameters? The nRF Connect property read shows:
ConnectionInterval = 50ms~100ms
SlaveLatency = 1
SuperTimeoutMonitor = 200
The Python log entry shows this:
INFO:pygatt.backends.bgapi.bgapi:Connection status: handle=0x0, flags=5, address=0xb'760b81afef0c', connection interval=75.000000ms, timeout=1000, latency=0 intervals, bonding=0xff
Thoughts anyone? (And truly, thanks in advance.)
I've answered my own questions. I now have to solve the new problem of why my tkinter dialog is "not responding" as a separate issue.
Thanks, all.
Edit 3/31/2020: I rewrote the script using PyQt and now have a functional app.
I need to test a framework that can observe the state of some JSON HTTP resource (I'm simplifying a bit here) and send information about its changes to a message queue, so that a client of a service based on this framework can reconstruct the actual state without polling the HTTP resource.
It's easy to formulate properties for such a framework. Say we have a list of triples (State, Diff, Timestamp):
gen_states = [(gs1, Nothing, t1), (gs2, Just d1-2, t2), (gs3, Just d2-3, t3), (gs4, Just d3-4, t4)]
and after mirroring all of these states to the HTTP resource (used as a test double) we gathered [rs1, rd1-2, rd2-3], where r stands for received. Then:
apply [rd1-2, rd2-3] rs1 == gs4  -- the final states should be the same
Also, say the polling interval was longer than the time difference t3 - t2 between changes; then we can lose the diff d2-3, but the state still has to be consistent with the state at a previous poll, gs2 for example. So we can miss some changes, but the received state should be consistent with one of the previous states that is no older than one polling interval.
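To make the apply property concrete, here is a sketch in Python (the representation of a diff as a (removed_ids, added_objects) pair is my own assumption, mirroring the Diff type sketched below):

```python
from functools import reduce

def apply_diff(state, diff):
    """Apply one diff (removed ids, added objects) to a list of objects keyed by 'id'."""
    removed, added = diff
    return [obj for obj in state if obj["id"] not in removed] + added

gs1 = [{"id": "1", "some": {"complex": "value"}}]
d12 = (["1"], [{"id": "2", "other": "x"}])
d23 = ([], [{"id": "3", "other": "y"}])

# Replaying all received diffs over the first received state must
# reproduce the final generated state.
final = reduce(apply_diff, [d12, d23], gs1)
print(final)
```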
The question is how to create a generator that generates random diffs for json resource, given that resource is always an array of objects that all have id key.
For example initial state could look like that
[
{"id": "1", "some": {"complex": "value"}},
{"id": "2", "other": {"simple": "value"}}
]
And the next state
[
{"id": "1", "some": {"complex": "value"}},
{"id": "3", "other": "simple_value"}
]
Which should make diff like
type Id = String
data Diff = Diff {removed :: [Id], added :: [(Id, JsonValue)]}
added = [aesonQQ| {"id": 3, "other": "simple_value"} |]
Diff [2] [added]
I've tried to derive Arbitrary for aeson's Object, but got this:
<interactive>:15:1: warning: [-Wmissing-methods]
• No explicit implementation for
‘arbitrary’
• In the instance declaration for
‘Arbitrary
(unordered-containers-0.2.8.0:Data.HashMap.Base.HashMap
Data.Text.Internal.Text Value)’
But even if I accomplished that, how would I specify that the added objects should have new, unique ids?
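The question is about Haskell, but the generator logic itself is language-agnostic; as a sketch of one approach (written in Python here, with names of my own invention): thread a monotonically increasing counter through the generator so every added object gets a fresh id that has never appeared before, and sample removals only from the ids currently in the state. In QuickCheck terms this suggests a stateful generator, e.g. one Gen action that produces the whole trace of diffs while carrying the counter along, rather than an independent Arbitrary instance per diff.

```python
import random

def random_value(rng):
    """A tiny stand-in for an arbitrary JSON value generator."""
    return rng.choice([{"complex": "value"}, "simple_value", 42, None])

def gen_diff(state, next_id, rng):
    """Generate a random diff against `state`, a list of objects with unique 'id' keys.

    Returns (diff, new_state, next_id). Fresh ids come from a monotonically
    increasing counter, which guarantees added ids are unique and never reused.
    """
    ids = [obj["id"] for obj in state]
    removed = rng.sample(ids, k=rng.randint(0, len(ids)))
    added = []
    for _ in range(rng.randint(0, 2)):
        added.append({"id": str(next_id), "other": random_value(rng)})
        next_id += 1
    new_state = [o for o in state if o["id"] not in removed] + added
    return {"removed": removed, "added": added}, new_state, next_id

rng = random.Random(0)
state = [{"id": "1", "some": {"complex": "value"}},
         {"id": "2", "other": {"simple": "value"}}]
diff, state2, nid = gen_diff(state, next_id=3, rng=rng)
# ids in the new state are still unique by construction
assert len({o["id"] for o in state2}) == len(state2)
print(sorted(o["id"] for o in state2))
```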
I'm developing a GStreamer plugin following the GStreamer Plugin Writer's Guide and using gst-element-maker from the gst-plugins-bad repository with the base class set to basetransform. As a starting point I have developed a plugin named MyFilter that simply passes the data along the chain. The plugin is working, but when I run gst-launch with the debug level set to 2, I get the following error:
alsa gstalsa.c:124:gst_alsa_detect_formats: skipping non-int format.
I am executing the command:
gst-launch --gst-debug-level=2 --gst-plugin-load=./src/libgstmyfilter.la filesrc location=./song.mp3 ! flump3dec ! audioconvert ! audioresample ! myfilter ! alsasink
From the base class created by gst-element-maker I removed the calls to gst_pad_new_from_static_template(), because they returned errors reporting that the sink and source pads were already created. I set the chain function with gst_pad_set_chain_function(), implemented gst_myfilter_transform_caps(), and added code to handle the GST_EVENT_NEWSEGMENT event. The STATIC_CAPS string I am using for both source and sink is:
"audio/x-raw-int, "
"rate = (int) { 16000, 32000, 44100, 48000 }, "
"channels = (int) [ 1, 2 ], "
"endianness = (int) BYTE_ORDER, "
"signed = (boolean) true, "
"width = (int) 16, "
"depth = (int) 16"
I return the caps from gst_myfilter_transform_caps() using gst_pad_get_fixed_caps_func() on GST_BASE_TRANSFORM_SRC_PAD(trans) / GST_BASE_TRANSFORM_SINK_PAD(trans). The pad caps are set by the default code created by gst-element-maker in gst_myfilter_base_init() using:
gst_element_class_add_pad_template(element_class, gst_static_pad_template_get(&gst_myfilter_sink_template));
Is there a problem with the GstBaseTransform class? I have another custom filter which does not use the GstBaseTransform class and does not have this problem. I am using GStreamer v0.10.36 with Ubuntu 12.04.
I'm using two ASFWriter filters in one graph: one writes a WMV file, the other provides live streaming. While streaming is running, changing the file name delays the recording restart by about 3 seconds, so the head of each new WMV file is missing. This is the problem. The topology:
CAMERA ------ InfTee Filter ---------- ASFWriter Filter  -> WMV file
                              \      /
                                 X
                              /      \
Microphone --- InfTee Filter2 -------- ASFWriter Filter2 -> live streaming
void RecStart()
{
    ...
    ConnectFilters(pInfTee,   "Infinite Pin Tee Filter(1)",  L"Output1", pASFWriter,  "ASFWriter", L"Video Input 01");
    ConnectFilters(pInfTee,   "Infinite Pin Tee Filter(2)",  L"Output2", pASFWriter2, "ASFWriter", L"Video Input 01");
    ConnectFilters(pSrcAudio, "Audio Source",                L"Capture", pInfTee2,    "Infinite Pin Tee Filter", L"Input");
    ConnectFilters(pInfTee2,  "Infinite Pin Tee Filter(1)A", L"Output1", pASFWriter,  "ASFWriter", L"Audio Input 01");
    ConnectFilters(pInfTee2,  "Infinite Pin Tee Filter(2)A", L"Output2", pASFWriter2, "ASFWriter", L"Audio Input 01");

    pASFWriter2->QueryInterface(IID_IConfigAsfWriter, (void**)&pConfig);
    pConfig->QueryInterface(IID_IServiceProvider, (void**)&pProvider);
    pProvider->QueryService(IID_IWMWriterAdvanced2, IID_IWMWriterAdvanced2, (void**)&mpWriter2);

    mpWriter2->SetLiveSource(TRUE);
    mpWriter2->RemoveSink(0);
    WMCreateWriterNetworkSink(&mpNetSink);
    DWORD dwPort = (DWORD)streamingPortNo;
    mpNetSink->Open(&dwPort);
    mpNetSink->GetHostURL(url, &url_len);
    hr = mpWriter2->AddSink(mpNetSink);

    pGraph->QueryInterface(IID_IMediaEventEx, (void**)&pMediaIvent);
    pMediaIvent->SetNotifyWindow((OAHWND)this->m_hWnd, WM_GRAPHNOTIFY, 0);
    pGraph->QueryInterface(IID_IMediaControl, (void**)&pMediaControl);
    pMediaControl->Run();
}

void OnTimer()
{
    pMediaControl->Stop();
    CComQIPtr<IFileSinkFilter, &IID_IFileSinkFilter> pIFS = pASFWriter;
    pIFS->SetFileName(NewFilename, NULL);
    pMediaControl->Run();
}
→ I think that, because the graph has to wait for the streaming to start again, about 3 seconds are missing from the head of each new WMV file. Are there any measures against this?
When you restart the graph, you inevitably miss a fragment of data due to initialization overhead, and it is impossible to switch files without stopping the graph. The solution is to use multiple graphs and keep capturing while the file-writing part is being reinitialized.
See DirectShow Bridges for a typical solution addressing this problem.