Can I route Node A into Node B, and Node B back into Node A (with a Mixer in between, of course) -- in other words, a feedback loop? (WebAudio, for example, supports this.)
No. Trying to set up a recursive route causes AVAudioEngine to freeze, and a seemingly unrelated error appears in the console:
warning: could not execute support code to read Objective-C class data in the process. This may reduce the quality of type information available.
I am using Python 3.8.10 and fabric 2.7.0.
I have a Connection to a remote host. I am executing a command such as follows:
resObj = connection.run("cat /usr/bin/binaryFile")
So in theory the bytes of /usr/bin/binaryFile are getting pumped into stdout, but I cannot figure out what wizardry is required to get them out of resObj.stdout and written to a local file with a matching checksum (that is, to get all the bytes out of stdout). For starters, len(resObj.stdout) != the size of binaryFile. Visually comparing what is present in resObj.stdout against /usr/bin/binaryFile via hexdump or similar makes them look roughly alike, but something is going wrong.
May the record show, I am aware that this particular example would be better accomplished with...
connection.get('/usr/bin/binaryFile')
The point though is that I'd like to be able to get arbitrary binary data out of stdout.
Any help would be greatly appreciated!!!
I eventually gave up on doing this with the fabric library and reverted to straight-up paramiko. People give paramiko a hard time for being "too low level," but the truth is that it offers a higher-level API which is pretty intuitive to use. I ended up with something like this:
from paramiko import SSHClient, AutoAddPolicy

with SSHClient() as client:
    client.set_missing_host_key_policy(AutoAddPolicy())
    client.connect(hostname, **connectKwargs)
    stdin, stdout, stderr = client.exec_command("cat /usr/bin/binaryFile")
In this setup, I can get the raw bytes via stdout.read() (or similarly, stderr.read()).
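For the checksum goal in the original question, those raw bytes can be streamed straight to disk. Here is a minimal sketch (the helper name save_stream is my own invention, not a paramiko API); it works with any binary file-like object, including the stdout object from exec_command above:

```python
import hashlib

def save_stream(src, dest_path, chunk_size=32768):
    """Copy a binary stream (e.g. paramiko's stdout) to a local file,
    returning the SHA-256 hex digest of the bytes written."""
    sha = hashlib.sha256()
    with open(dest_path, "wb") as f:
        while True:
            chunk = src.read(chunk_size)  # raw bytes, never decoded
            if not chunk:
                break
            f.write(chunk)
            sha.update(chunk)
    return sha.hexdigest()
```

Comparing the returned digest against sha256sum /usr/bin/binaryFile run on the remote host confirms that every byte made it across.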
To do other things that fabric exposes, like put and get, it is easy enough to do:
# client from above
with client.open_sftp() as sftpClient:
    sftpClient.put(...)
    sftpClient.get(...)
I was also able to get the exit code, per this SO answer, by doing:
# stdout from above
stdout.channel.recv_exit_status()
The docs for recv_exit_status (https://docs.paramiko.org/en/latest/api/channel.html#paramiko.channel.Channel.recv_exit_status) list a few gotchas that are worth being aware of too.
The moral of the story for me is that fabric ends up feeling like an over-abstraction, while paramiko has an easy-to-use higher-level API as well as the low-level primitives when appropriate.
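As a footnote on why resObj.stdout could never match the checksum: as far as I can tell, fabric decodes the command's output to text, and decoding is lossy for arbitrary binary data. A standalone illustration of just the decode step (not fabric itself):

```python
# Decoding bytes to text mangles binary data: byte sequences that are not
# valid UTF-8 get replaced, so re-encoding does not reproduce the original.
raw = bytes(range(256))                       # stand-in for binary file content
text = raw.decode("utf-8", errors="replace")  # bytes 0x80-0xff become U+FFFD
recovered = text.encode("utf-8")
print(len(raw) == len(recovered))             # the round trip changed the length
```

This is exactly the symptom from the question: the decoded string "looks about similar" in a visual inspection but has a different length and a different checksum.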
I am a new mainframer and I have been given access to/control of a test system to play around in and learn. We have been trying to get IMS set up on the system but when I try to log into IMS 14 I get the error
"INIT SELF FAILED WITH SENSE 08570002".
I have found that the error code means, "The SSCP-PLU session is inactive."
I am thinking that the issue is with the VTAM configuration but I am not sure what exactly needs to be fixed or where in z/OS to look for it.
I have asked around and dug through documentation with no luck so any help would be very much appreciated.
The message indicates that an attempt was made to establish a session between the SSCP (VTAM) and a primary LU (an application), and the application was not available. This is done on behalf of an SLU (secondary logical unit), which is generally a terminal or printer.
This could be the result of several situations, but here are some common ones:
An attempt was made to log on to something like TSO, CICS, IMS, etc. before the VTAM ACB was actually opened. You can attempt the request again later when the service is up.
To determine whether the PLU (application) is available, use the VTAM command D NET,ID=vtamappl, where vtamappl is the application ID you are trying to connect to. This command is entered on the console directly or through a secondary means like SDSF.
There may be a LOGAPPL= statement coded on the LU definition that tells VTAM to attempt to initiate a session when starting the LU. In your case this would appear to be happening before the PLU (application) is up. The LU definition (or generic definition) is in the VTAMLST concatenation.
This manual describes the sense code in more detail.
I need to instantiate the recent version of the ICache in ROCKET-CHIP project stand-alone. I was able to test this instantiation using 6 months old version. However, I am facing troubles with its 'mem' port in the recent version:
val node = TLClientNode(TLClientParameters(sourceId = IdRange(0,1)))
.....
val mem = outer.node.bundleOut
According to my understanding, the ROCKET-CHIP project started to use a special type of node where both SOURCE and SINK nodes must be connected to a crossbar using the 'TLXbar' class. I tried to follow the code in http://stackissue.com/ucb-bar/rocket-chip/tilelink2-245.html but it seems obsolete. Can anyone point me to how I can connect this port?
Recently I successfully created a trivial TileLink2 node (just passing input to output with some masks) and inserted it between l1backend.node and TileNetwork.masterNodes.head. So I think my experience might be helpful.
Rocket-chip's diplomacy package extends Chisel's Module hierarchy. It mainly consists of two parts: LazyModule and LazyModuleImp, where LazyModuleImp is the real Module in the Chisel world.
Nodes are always created in a LazyModule, while node.bundleIn/Out should be referenced inside the LazyModuleImp. We should use the nodes in LazyModule to interconnect with each other via :=.
Another thing that might be helpful is that inside LazyModuleImp we can only reference bundleIn/Out in IO bundles from nodes that directly belong to the corresponding LazyModule.
For example, if you have a sub lazy module such as an XXXCrossing that contains a node, you'd better not use its bundleIn/Out as your current lazy module's IO bundles. Otherwise the Chisel code might compile successfully, but the FIRRTL output will contain undeclared symbols.
I want to use Google Chrome's IndexedDB to persist data on the client-side.
Idea is to access the IndexedDB outside of chrome, via Node.JS, later on.
The background is the idea to track usage behaviour locally and store the collected data on the client for later analysis without a server backend.
From my understanding, IndexedDB is implemented on top of LevelDB. However, I cannot open the LevelDB store with any of the tools/libs like LevelUp/LevelDown or leveldb-json.
I'm always getting this error message:
leveldb-dump-to-json --file test.json --db https_www.reddit.com_0.indexeddb.leveldb
events.js:141
      throw er; // Unhandled 'error' event
      ^
OpenError: Invalid argument: idb_cmp1 does not match existing comparator : leveldb.BytewiseComparator
    at /usr/local/lib/node_modules/leveldb-json/node_modules/levelup/lib/levelup.js:114:34
Can anybody please help? It seems as if the Chrome implementation is somehow special/different.
Keys in leveldb are arbitrary binary sequences. Clients implement comparators to define the ordering between keys. The default comparator for leveldb is something equivalent to strncmp. Chrome's comparator for Indexed DB's store is more complicated. If you try to use a leveldb instance with a different comparator than it was created with, you'll observe keys in seemingly random order, and insertion would be unpredictable or cause corruption - dogs and cats living together, mass hysteria. So leveldb lets you name the comparator (the name is persisted to the database) to help detect and avoid this mistake, which is what you're seeing. Chrome's code names its comparator for Indexed DB "idb_cmp1".
To inspect one of Chrome's Indexed DB leveldb instances outside of chrome you'd need to implement a compatible comparator. The code lives in Chrome's implementation at content/browser/indexed_db/indexed_db_backing_store.cc - and note that there's no guarantee that this is fixed across versions. (Apart from backwards compatibility, of course)
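Since the comparator name is persisted in the database's MANIFEST file, you can at least confirm which comparator a given store was created with before trying to open it. A best-effort sketch (detect_comparator is my own helper, and the substring scan is a shortcut that does not properly parse the MANIFEST log format):

```python
import glob
import os

def detect_comparator(db_dir):
    """Scan a leveldb directory's MANIFEST file(s) for a known comparator
    name. Returns the first match, or None if nothing recognizable is found."""
    known = (b"idb_cmp1", b"leveldb.BytewiseComparator")
    for path in sorted(glob.glob(os.path.join(db_dir, "MANIFEST-*"))):
        with open(path, "rb") as f:
            data = f.read()
        for name in known:
            if name in data:
                return name.decode("ascii")
    return None
```

Running this against a Chrome profile's *.indexeddb.leveldb directory should report idb_cmp1, confirming that a stock leveldb tool (which assumes leveldb.BytewiseComparator) will refuse to open it.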
It's implemented and publicly available on GitHub now
C# example
Python example
maven: https://github.com/hnuuhc/often-utils
code:
Map<String, String> storages = LocalStorage.home().getForDomain("pixiv.net");
I need to make an old Linux box running 2.6.12.1 kernel communicate with an older computer that is using:
ISO 8602 Datagram (connectionless service) 1987 12 15 (1st Edition)
ISO 8073 Class 4 (connection oriented service)
These are using the "Inactive Network Layer" subset. (I am pretty sure this means I do not have to worry about routing; the two endpoints address each other directly by MAC address.)
I have a kernel module that implements the connectionless part. What is the best approach to get the connection-oriented service operational? So far I have been adding the struct proto_ops .connect, .accept, and .listen functions to my existing connectionless driver by referring to the TCP implementation.
Maybe there is a better approach? I am spending a lot of time trying to work out what the tcp code is doing and then deciding whether that is relevant to my needs. For example, the Nagle algorithm isn't needed because I don't have small bits of data being transmitted. In addition, there is probably a lot of error-recovery and flow-control machinery I don't need, because I know what data the two endpoints transmit and how frequently they transmit it.
My plan is to implement this first with whatever simplistic (if any) packet retransmission, sequencing, etc., to the point where my wireshark capture looks similar to the capture I have from the live system; then try mine against the real thing and add in whatever error-recovery/retransmit logic seems necessary. In other words, it is a pain in the rear trying to separate the guts of the tcp/stream implementation that I want to copy from the extra error-correction/flow-control machinery that I might never need.
I found net/core/stream.c, which says:
* Generic stream handling routines. These are generic for most
* protocols. Even IP. Tonight 8-).
* This is used because TCP, LLC (others too) layer all have mostly
* identical sendmsg() and recvmsg() code.
* So we (will) share it here.
This suggested to me that maybe there might be a simpler stream thingy that I can start from. Can someone recommend a more basic streams driver that I should start from instead of tcp?
Is there any example code that provides a basic stream implementation?
I made a user-level library to implement the protocol, providing my own versions of open/read/write/select, etc. If anyone else cares, you can find me at http://pnwsoft.com
Do not attempt to use openss7. It is a total waste of time.