Create a connection coming from a `spike_source_cell` in Arbor? - arbor-simulator

The docs specify that in order to create a connection, a source and a destination are required (of type cell_global_label and cell_local_label respectively). For connections between cable cells this works fine, because you can place labels on their decor and then use those labels in the cell_global_label, but how do I connect from a spike_source_cell?
Here's what I do for cable cells:
arbor.connection(
    arbor.cell_global_label(gid, "soma_spike_detector"),  # source: detector label on the presynaptic cell
    arbor.cell_local_label("soma_synapse"),                # destination: synapse label on this cell
    1,    # weight
    0.1   # delay (ms)
)
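For context, the labels referenced above would be placed on the cable cell's decor roughly like this (a sketch only; the locset and the expsyn mechanism are assumptions, and depending on the Arbor version the detector may be arbor.spike_detector rather than arbor.threshold_detector):

decor = arbor.decor()
# detector that acts as the connection source on the cable cell
decor.place('"root"', arbor.threshold_detector(-10), "soma_spike_detector")
# synapse that acts as the connection destination
decor.place('"root"', arbor.synapse("expsyn"), "soma_synapse")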
But since I can't create labels on a spike_source_cell it throws the following error:
RuntimeError: Model building error on cell 26: connection endpoint label "soma_spike_detector": label does not exist.

The docs on spike source cells mention:
has one built-in source, which needs to be given a label to be used when forming connections from the cell;
So you can use the label that you gave when constructing the spike_source_cell as the label when constructing the cell_global_label:
# When constructing the source cell
arbor.spike_source_cell(
    "spike_source",
    arbor.explicit_schedule([5, 10, 12])
)

# In the recipe's `connections_on`:
arbor.connection(
    arbor.cell_global_label(gid, "spike_source"),
    arbor.cell_local_label("soma_synapse"),
    1,
    0.1
)
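To show how the two pieces fit together, here is a minimal recipe sketch (the gids, the pre-built cable cell, and its "soma_synapse" label are assumptions, not from the original answer):

import arbor

class two_cell_recipe(arbor.recipe):
    # gid 0: spike source, gid 1: a cable cell with a synapse labelled "soma_synapse"
    def __init__(self, cable_cell):
        arbor.recipe.__init__(self)
        self.cable_cell = cable_cell  # assumed to be built elsewhere

    def num_cells(self):
        return 2

    def cell_kind(self, gid):
        return arbor.cell_kind.spike_source if gid == 0 else arbor.cell_kind.cable

    def cell_description(self, gid):
        if gid == 0:
            # "spike_source" is the label referenced in connections_on below
            return arbor.spike_source_cell("spike_source",
                                           arbor.explicit_schedule([5, 10, 12]))
        return self.cable_cell

    def connections_on(self, gid):
        if gid == 1:
            # connect the spike source (gid 0) to the synapse on this cell
            return [arbor.connection(arbor.cell_global_label(0, "spike_source"),
                                     arbor.cell_local_label("soma_synapse"),
                                     1, 0.1)]
        return []

    def global_properties(self, kind):
        # simulation-wide defaults needed by the cable cell
        return arbor.neuron_cable_properties()

# sim = arbor.simulation(two_cell_recipe(my_cable_cell))  # newer Arbor versions accept just the recipe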

Related

Octave boxwidth does not recognise core figure properties

I am trying to use the boxplot command in the statistics package, and it seems like most of the plot options are not recognised by Octave; by that I mean that passing options like "BoxWidth" results in the following error:
error: set: unknown line property BoxWidth
error: __go_line__: unable to create graphics handle
error: called from
__plt__>__plt2vv__ at line 495 column 10
__plt__>__plt2__ at line 242 column 14
__plt__ at line 107 column 18
The code snippet producing this is as follows, with the note that I have tried lower, upper, camel, and sentence case for "BoxWidth" (the documentation specifies camel case), and that I have tried both quotation marks and apostrophes to mark out the properties and the property options, with the same error produced in each case.
groups = [g_1, g_2, g_3, g_4, g_5, g_6, g_7, g_8, g_9, g_10, g_11];
data = [day_1_seat, day_2_seat, day_3_seat, day_4_seat, day_5_seat, ...
day_6_seat, day_7_seat, day_8_seat, day_9_seat, day_10_seat, ...
day_11_seat];
labels = {"29/07", "04/08", "05/08", "06/08", "07/08", "09/08", "11/08",...
"12/08", "13/08", "28/08", "01/09"};
s = boxplot(data,groups, "Notch", 0, "Symbol",".", "BoxWidth", "fixed");
The nature of the data in "groups" and "data" is unimportant, as I can create the boxplot without any issue when I do not specify properties. I have also tried specifying plot options after the initial call to boxplot, with no luck.
This issue also occurs with other properties, such as Labels, OutlierTags etc., but not with "Notch" or "Symbol". I'm not a novice user, but I cannot figure out what the issue is here; any advice would be greatly appreciated!

Coupling of Different Blocks in a UNET

I am starting to work with neural networks using Keras. I am trying to adapt the model (a UNet-like architecture) given by Sim, Oh, Kim, Jung in "Optimal Transport driven CycleGAN for Unsupervised Learning in Inverse Problems" (Fig. 10).
# Assumed imports (the original post did not show them; InstanceNormalization
# is taken from tensorflow_addons here)
from tensorflow.keras.layers import (Input, Conv2D, Conv2DTranspose, MaxPool2D,
                                     LeakyReLU, Concatenate)
from tensorflow.keras.models import Model
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.initializers import RandomNormal
from tensorflow_addons.layers import InstanceNormalization

def def_generator(image_shape=(256, 256, 3)):
    init = RandomNormal(stddev=0.02)  # defined but not used in the original post
    # Start of 1st Block
    in_image = Input(shape=image_shape)
    g1 = Conv2D(64, (3, 3))(in_image)
    g1 = InstanceNormalization(axis=-1)(g1)
    g1 = LeakyReLU(alpha=0.2)(g1)
    g1 = Conv2D(64, (3, 3))(g1)
    g1 = InstanceNormalization(axis=-1)(g1)
    g1 = LeakyReLU(alpha=0.2)(g1)
    # End of 1st Block
    # Start of 2nd Block
    g2 = MaxPool2D()(g1)
    g2 = Conv2D(128, (3, 3))(g2)
    g2 = InstanceNormalization(axis=-1)(g2)
    g2 = LeakyReLU(alpha=0.2)(g2)
    g2 = Conv2D(128, (3, 3))(g2)
    g2 = InstanceNormalization(axis=-1)(g2)
    g2 = LeakyReLU(alpha=0.2)(g2)
    # End of 2nd Block
    # Start of 3rd Block
    g3 = MaxPool2D()(g2)
    g3 = Conv2D(256, (3, 3))(g3)
    g3 = InstanceNormalization(axis=-1)(g3)
    g3 = LeakyReLU(alpha=0.2)(g3)
    g3 = Conv2D(256, (3, 3))(g3)
    g3 = InstanceNormalization(axis=-1)(g3)
    g3 = LeakyReLU(alpha=0.2)(g3)
    # End of 3rd Block
    # Start of 4th Block
    g4 = MaxPool2D()(g3)
    g4 = Conv2D(512, (3, 3))(g4)
    g4 = InstanceNormalization(axis=-1)(g4)
    g4 = LeakyReLU(alpha=0.2)(g4)
    g4 = Conv2D(512, (3, 3))(g4)
    g4 = InstanceNormalization(axis=-1)(g4)
    g4 = LeakyReLU(alpha=0.2)(g4)
    g4 = Conv2D(256, (3, 3))(g4)
    g4 = InstanceNormalization(axis=-1)(g4)
    g4 = LeakyReLU(alpha=0.2)(g4)
    g4 = Conv2DTranspose(256, (2, 2), strides=(4, 4), output_padding=1)(g4)
    # End of 4th Block
    # Start of 5th Block
    g5input = Concatenate()([g4, g3])
    g5 = Conv2D(256, (3, 3))(g5input)
    g5 = InstanceNormalization(axis=-1)(g5)
    g5 = LeakyReLU(alpha=0.2)(g5)
    g5 = Conv2D(256, (3, 3))(g5)
    g5 = InstanceNormalization(axis=-1)(g5)
    g5 = LeakyReLU(alpha=0.2)(g5)
    g5 = Conv2DTranspose(128, (2, 2), strides=(3, 3), padding='same', output_padding=0)(g5)
    # End of 5th Block
    # Start of 6th Block
    g6input = Concatenate()([g5, g2])
    g6 = Conv2D(128, (2, 2))(g6input)
    g6 = InstanceNormalization(axis=-1)(g6)
    g6 = LeakyReLU(alpha=0.2)(g6)
    g6 = Conv2D(128, (2, 2))(g6)
    g6 = InstanceNormalization(axis=-1)(g6)
    g6 = LeakyReLU(alpha=0.2)(g6)
    g6 = Conv2DTranspose(64, (2, 2), strides=(2, 2), padding='valid', output_padding=1)(g6)
    # End of 6th Block
    # Start of 7th Block
    g7input = Concatenate()([g6, g1])
    g7 = Conv2D(64, (2, 2))(g7input)
    g7 = InstanceNormalization(axis=-1)(g7)
    g7 = LeakyReLU(alpha=0.2)(g7)
    g7 = Conv2D(64, (2, 2))(g7)
    g7 = InstanceNormalization(axis=-1)(g7)
    g7 = LeakyReLU(alpha=0.2)(g7)
    g7 = Conv2DTranspose(1, (1, 1))(g7)
    model = Model(in_image, g5)
    model.compile(loss='mse', optimizer=Adam(lr=2e-4, beta_1=0.5),
                  loss_weights=[0.5], metrics=['accuracy'])
    return model

g = def_generator((120, 120, 1))
print(g.summary())
I always run into the problem that the dimensions of the layers which should be concatenated are not compatible.
I understand that this issue results from the MaxPooling+Conv2D steps before.
I am now wondering if there is a trick/strategy to avoid/reduce this issue?
Any help will be appreciated.
Best wishes
Michael
The problem is very simple: you are concatenating blocks whose layers have different sizes. This is happening because you are trying to run the network on images whose sides are NOT a power of 2. When you max-pool an image whose side is not divisible by 2 you lose a pixel (243x243 -> 121x121), and when you double it again with the transpose you get a different size (121x121 -> 242x242), so the concatenation doesn't work because 242 is different from 243: the images are of different sizes (at least this is what I think, you should have shared the error).
This means that when an image reaches a max-pooling layer, its sides need to be divisible by 2.
So, the solution:
having 4 blocks means that the image sides need to be divisible by 16 at the very least, otherwise it will not work.
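As a quick illustration of that divisibility rule (a sketch added here, not part of the original answer, and ignoring the extra shrinkage caused by the 'valid' convolutions in the posted code):

def side_survives_unet(side, n_blocks=4):
    # Simulate n halvings (MaxPool2D with its default 2x2 pool) and check
    # that no halving hits an odd side length, which would drop a pixel.
    s = side
    for _ in range(n_blocks):
        if s % 2 != 0:
            return False   # e.g. 243 -> 121, and upsampling gives 242 != 243
        s //= 2
    return True

print(side_survives_unet(243))   # False
print(side_survives_unet(256))   # True: 256 is divisible by 16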

Python Value Error: Duplicated level name: "variable", assigned to level 1, is already used for level 0

I am using the code below for visualizing multiple charts:
data.groupby(['y', 'y']).size().unstack().plot(kind='bar', stacked=True, ax=plt.subplot(6,2,2+1),figsize=(15,25))
But getting the following error:
Duplicated level name: "y", assigned to level 1, is already used for level 0.
whereas if I write the same code with a different first variable it works, for example:
data.groupby(['otherObj', 'y']).size().unstack().plot(kind='bar', stacked=True, ax=plt.subplot(6,2,2+1),figsize=(15,25))
How can this error be resolved?
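For what it's worth, a minimal sketch of what is going on (the DataFrame below is hypothetical, not from the original post): grouping by the same column twice produces a MultiIndex whose two levels are both named "y", which pandas rejects, whereas grouping by the single column gives the per-category counts without the error.

import pandas as pd
import matplotlib.pyplot as plt

# hypothetical stand-in for the original 'data'
data = pd.DataFrame({"y": ["a", "b", "a", "b", "a"],
                     "otherObj": ["x", "x", "z", "z", "x"]})

# data.groupby(['y', 'y']) would raise the duplicated-level-name error;
# a single-column groupby avoids it:
counts = data.groupby('y').size()
counts.plot(kind='bar')
plt.show()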

Deriving boolean expressions from hand drawn logic gate diagrams with python OpenCv

Using TensorFlow I identified all the gates, letters and nodes.
When identifying the components, it draws a rectangular box around each component.
The following array therefore contains a list of all the detected components; each entry is in the following order:
Name of component
X, Y coordinates of top left corner of rectangle
X, Y coordinates of right bottom of rectangle
Nodes (black points) are used to indicate a bridge, where one line passes over another without crossing it.
The array of all the above components:
labels =[['NODE',(1002.9702758789062, 896.4686675071716), (1220.212585389614, 1067.1142654418945)], ['NODE',(1032.444071739912, 635.7160077095032),(1211.6839590370655, 763.4382424354553)],['M', (57.093908578157425,607.6229677200317),(311.9765570014715,833.807623386383)],['NODE', (344.5295810997486, 806.3690414428711), (501.8982524871826, 930.6454839706421)], ['Z', (21.986433800309896, 1327.9791088104248), (266.36098374426365, 1565.158670425415)], ['OR', (476.0066536962986, 574.401759147644), (918.3125713765621, 1177.1423168182373)], ['NODE', (333.50814148783684, 1058.0092916488647), (497.6142471432686, 1202.9034795761108)], ['K', (37.06201596558094, 870.0414619445801), (311.77860628068447, 1105.8665227890015)], ['AND', (665.9987451732159, 1227.940999031067), (1062.7052736580372, 1594.6843948364258)],['AND', (1373.9987451732159, 204.940999031067), (1703.7052736580372, 612.6843948364258)], ['NOT', (694.2882044911385, 260.5083291530609), (1027.812717139721, 450.35294365882874)], ['XOR', (2027.6711627840996, 593.0362477302551), (2457.9011510014534, 1093.9836854934692)], ['J', (85.69029207900167, 253.8458535671234), (334.48535946011543, 456.5887498855591)], ['OUTPUT', (2657.3825285434723, 670.8418045043945), (2929.8974316120148, 975.4852895736694)]]
Then, using a line detection algorithm, I identified all 17 lines connecting the components.
The 17 lines are given as a list; they are not in any particular order, and each line has 2 end points.
lines = [[(60, 1502), (787, 1467)], [(125, 1031), (691, 988)], [(128, 772), (685, 758)], [(131, 336),(709,347)], [(927,350),(1455, 348)], [(400, 1361), (792, 1369)], [(834, 843), (2343, 939)], [(915, 1430), (1119, 1424)], [(1125, 468), (1453, 470)], [(1587, 399), (1911, 405)], [(1884, 755), (2245, 814)],[(2372, 831), (2918, 859)], [(1891, 397), (1901, 767)], [(1138, 457), (1128, 738)], [(441, 738), (421, 903)], [(1125, 946), (1101, 1437)], [(420, 1098), (408, 1373)]]
When connecting those lines, the following scenario must be considered: nodes indicate a bridge, so two lines can pass over each other at a node without being connected.
For example, M is an input to one AND gate and (M.Z) is an input to the other AND gate.
So how can I generate the Boolean expression from the above 2 arrays while respecting that scenario? This should be a function that works for all logic gates.
It can be assumed that the image is always read from left to right.
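No answer is recorded here, but here is a rough sketch of one possible strategy (the helper names, the distance margin, and the left-to-right heuristic are all assumptions, not from the original post): attach each line endpoint to the component whose padded bounding box contains it, build a graph assuming signals flow left to right, then recurse from the OUTPUT component back towards the inputs, emitting a sub-expression per gate.

def box_contains(comp, pt, margin=60):
    # comp is one entry of `labels`: [name, top-left, bottom-right]
    (x1, y1), (x2, y2) = comp[1], comp[2]
    x, y = pt
    return x1 - margin <= x <= x2 + margin and y1 - margin <= y <= y2 + margin

def endpoint_owner(labels, pt):
    # index of the first component whose (padded) box contains the endpoint
    for i, comp in enumerate(labels):
        if box_contains(comp, pt):
            return i
    return None

def build_graph(labels, lines):
    # inputs[i] = indices of components feeding component i
    inputs = {i: [] for i in range(len(labels))}
    for p, q in lines:
        a, b = endpoint_owner(labels, p), endpoint_owner(labels, q)
        if a is None or b is None or a == b:
            continue  # dangling line, or a wire segment not touching two boxes
        # assume signal flows left to right: the box further left is the source
        left, right = sorted((a, b), key=lambda i: labels[i][1][0])
        inputs[right].append(left)
    return inputs

def expression(labels, inputs, i):
    name = labels[i][0]
    args = [expression(labels, inputs, j) for j in inputs[i]]
    if name in ('NODE', 'OUTPUT'):
        # naive pass-through; a proper bridge would pair up opposite line segments
        return args[0] if args else '?'
    if name == 'NOT':
        return 'NOT(' + args[0] + ')' if args else '?'
    if name in ('AND', 'OR', 'XOR'):
        return '(' + (' ' + name + ' ').join(args) + ')'
    return name  # single letters are input terminals

# usage, with `labels` and `lines` as defined above:
# out = next(i for i, c in enumerate(labels) if c[0] == 'OUTPUT')
# print(expression(labels, build_graph(labels, lines), out))

The NODE handling here is deliberately naive; pairing up the line segments that enter and leave each node on opposite sides is the part that needs bridge-specific logic.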

linearK error in seq. default() cannot be NA, NaN

I am trying to learn linearK estimates on a small linnet object from the CRC spatstat book (chapter 17), and when I use the linearK function, spatstat throws an error. I have documented the process in the comments of the R code below. The error is as follows:
Error in seq.default(from = 0, to = right, length.out = npos + 1L) : 'to' cannot be NA, NaN or infinite
I do not understand how to resolve this. I am following this process:
# I have data of points for each day of the week
# d1 is district 1 of the city.
# I did the step below otherwise it was giving me tbl class
d1_data=lapply(split(d1, d1$openDatefactor),as.data.frame)
# I previously created a linnet and divided it into districts of the city
d1_linnet = districts_linnet[["d1"]]
# I create point pattern for each day
d1_ppp = lapply(d1_data, function(x) as.ppp(x, W=Window(d1_linnet)))
plot(d1_ppp[[1]], which.marks="type")
# I then convert the point pattern to a point pattern on a linear network
d1_lpp <- as.lpp(d1_ppp[[1]], L=d1_linnet, W=Window(d1_linnet))
d1_lpp
Point pattern on linear network
3 points
15 columns of marks: ‘status’, ‘number_of_’, ‘zip’, ‘ward’,
‘police_dis’, ‘community_’, ‘type’, ‘days’, ‘NAME’,
‘DISTRICT’, ‘openDatefactor’, ‘OpenDate’, ‘coseDatefactor’,
‘closeDate’ and ‘instance’
Linear network with 4286 vertices and 6183 lines
Enclosing window: polygonal boundary
enclosing rectangle: [441140.9, 448217.7] x [4640080, 4652557] units
# the errors start from plotting this lpp object
plot(d1_lpp)
"show.all" is not a graphical parameter
Error in plot.window(...) : need finite 'xlim' values
coords(d1_lpp)
x y seg tp
441649.2 4649853 5426 0.5774863
445716.9 4648692 5250 0.5435492
444724.6 4646320 677 0.9189631
3 rows
And then, consequently, I also get an error from linearK(d1_lpp):
Error in seq.default(from = 0, to = right, length.out = npos + 1L) : 'to' cannot be NA, NaN or infinite
I feel the lpp object has the problem, but I find it hard to interpret these errors and how to resolve them. Could someone please guide me?
Thanks
I can confirm there is a bug in plot.lpp when trying to plot a marked point pattern on a linear network. That will hopefully be fixed soon. You can plot the unmarked point pattern using
plot(unmark(d1_lpp))
I cannot reproduce the problem with linearK. Which version of spatstat are you running? In the development version on my laptop, spatstat_1.51-0.073, everything works. There have been changes to this code recently, so it is likely that this will be solved by updating to the development version (see https://github.com/spatstat/spatstat).
