How to add a bar on a sympy symbol?

I want to define sympy symbols such that when I display them I get a bar on them. I tried the following one:
c1bar, c2bar, c3bar, ubar, alphabar = sympy.symbols(r'$\bar{c_1} \bar{c_2} \bar{c_3} \bar{u} \bar{\alpha}$')
I then try to display it: display(alphabar), but I get: $\displaystyle \bar{\alpha}$$
How to fix this?

Just remove the $ signs: SymPy automatically detects whether you provided LaTeX syntax. So your example becomes:
c1bar, c2bar, c3bar, ubar, alphabar = sympy.symbols(r'\bar{c_1} \bar{c_2} \bar{c_3} \bar{u} \bar{\alpha}')
Note that the bar is trying to cover both the symbol and the subscript, but it falls short. As far as I know, \bar only covers a single character. You can write:
c1bar, c2bar, c3bar, ubar, alphabar = sympy.symbols(r'\bar{c}_1 \bar{c}_2 \bar{c}_3 \bar{u} \bar{\alpha}')
Or you can replace \bar with \overline:
c1bar, c2bar, c3bar, ubar, alphabar = sympy.symbols(r'\overline{c_1} \overline{c_2} \overline{c_3} \overline{u} \overline{\alpha}')
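For reference, here is a minimal sketch of the fixed version in a Jupyter/IPython session (the init_printing and display setup is an assumption about your environment, not part of the original question):
import sympy
from sympy import init_printing
from IPython.display import display
init_printing()  # enable LaTeX rendering of expressions
alphabar = sympy.symbols(r'\overline{\alpha}')
display(alphabar)  # renders as alpha with an overline, with no stray $ signs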

Related

kdb/q: How to apply a string manipulation function to a vector of strings to output a vector of strings?

Thanks in advance for the help. I am new to kdb/q, coming from a Python and C++ background.
Just a simple syntax question: I have a string with fields and their corresponding values
pp_str: "field_1:abc field_2:xyz field_3:kdb"
I wrote an atomic (scalar) function to extract the value of a given field.
get_field_value: {[field; pp_str]
  pp_fields: " " vs pp_str;
  pid_field: pp_fields[where like[pp_fields; field,":*"]];
  start_i: (pid_field[0] ss ":")[0] + 1;
  end_i: count pid_field[0];
  indices: start_i + til (end_i - start_i);
  pid_field[0][indices]}
show get_field_value["field_1"; pp_str]
"abc"
show get_field_value["field_3"; pp_str]
"kdb"
Now how do I generalize this so that if I input a vector of fields, I get a vector of values? I want to input ("field_1"; "field_2"; "field_3") and output ("abc"; "xyz"; "kdb"). I tried multiple approaches (below) but I just don't understand kdb/q's syntax well enough to vectorize my function:
/ Attempt 1 - Fail
get_field_value[enlist ("field_1"; "field_2"); pp_str]
/ Attempt 2 - Fail
get_field_value[; pp_str] /. enlist ("field_1"; "field_3")
/ Attempt 3 - Fail
fields: ("field_1"; "field_2")
get_field_value[fields; pp_str]
To run your function for each field, you can project on the pp_str argument and use each over the fields:
q)get_field_value[;pp_str]each("field_1";"field_3")
"abc"
"kdb"
Kdb actually has built-in functionality to handle this: https://code.kx.com/q/ref/file-text/#key-value-pairs
q){#[;x](!/)"S: "0:y}[`field_1;pp_str]
"abc"
q)
q){#[;x](!/)"S: "0:y}[`field_1`field_3;pp_str]
"abc"
"kdb"
I think this might be the syntax you're looking for.
q)get_field_value[; pp_str]each("field_1";"field_2")
"abc"
"xyz"

PACF function in statsmodels.tsa.stattools gives numbers greater than 1 when using ywunbiased?

I have a dataframe which is of length 177 and I want to calculate and plot the partial auto-correlation function (PACF).
I have the data imported etc and I do:
from statsmodels.tsa.stattools import pacf
import matplotlib.pyplot as plt  # needed for the plotting calls below
nlags = 176
ys = pacf(data[key][array].diff(1).dropna(), alpha=0.05, nlags=nlags, method="ywunbiased")
xs = range(nlags + 1)
plt.figure()
plt.scatter(xs,ys[0])
plt.grid()
plt.vlines(xs, 0, ys[0])
plt.plot(ys[1])
The method used results in numbers greater than 1 for very long lags (around 90), which is incorrect, and I get a RuntimeWarning: invalid value encountered in sqrt (raised from the line return rho, np.sqrt(sigmasq)), but since I can't see their source code I don't know what this means.
To be honest, when I search for PACF, all the examples only compute it up to 40 or 60 lags, and they never have any significant PACF after lag 2, so I couldn't compare against other examples either.
But when I use:
method="ols"
# or
method="ywmle"
the numbers look correct, so it must be the algorithm they use to solve it.
I tried using inspect and its getsource method, but it's not much help: it just shows that the function uses another package, which I can't find.
If you also know where the problem arises from, I would really appreciate the help.
For your reference, the values for data[key][array] are:
[1131.130005, 1144.939941, 1126.209961, 1107.300049, 1120.680054, 1140.839966, 1101.719971, 1104.23999, 1114.579956, 1130.199951, 1173.819946, 1211.920044, 1181.27002, 1203.599976, 1180.589966, 1156.849976, 1191.5, 1191.329956, 1234.180054, 1220.329956, 1228.810059, 1207.01001, 1249.47998, 1248.290039, 1280.079956, 1280.660034, 1294.869995, 1310.609985, 1270.089966, 1270.199951, 1276.660034, 1303.819946, 1335.849976, 1377.939941, 1400.630005, 1418.300049, 1438.23999, 1406.819946, 1420.859985, 1482.369995, 1530.619995, 1503.349976, 1455.27002, 1473.98999, 1526.75, 1549.380005, 1481.140015, 1468.359985, 1378.550049, 1330.630005, 1322.699951, 1385.589966, 1400.380005, 1280.0, 1267.380005, 1282.829956, 1166.359985, 968.75, 896.23999, 903.25, 825.880005, 735.090027, 797.869995, 872.8099980000001, 919.1400150000001, 919.320007, 987.4799800000001, 1020.6199949999999, 1057.079956, 1036.189941, 1095.630005, 1115.099976, 1073.869995, 1104.48999, 1169.430054, 1186.689941, 1089.410034, 1030.709961, 1101.599976, 1049.329956, 1141.199951, 1183.26001, 1180.550049, 1257.640015, 1286.119995, 1327.219971, 1325.829956, 1363.609985, 1345.199951, 1320.640015, 1292.280029, 1218.890015, 1131.420044, 1253.300049, 1246.959961, 1257.599976, 1312.410034, 1365.680054, 1408.469971, 1397.910034, 1310.329956, 1362.160034, 1379.319946, 1406.579956, 1440.670044, 1412.160034, 1416.180054, 1426.189941, 1498.109985, 1514.680054, 1569.189941, 1597.569946, 1630.73999, 1606.280029, 1685.72998, 1632.969971, 1681.550049, 1756.540039, 1805.810059, 1848.359985, 1782.589966, 1859.449951, 1872.339966, 1883.949951, 1923.569946, 1960.22998, 1930.6700440000002, 2003.369995, 1972.290039, 2018.050049, 2067.560059, 2058.899902, 1994.9899899999998, 2104.5, 2067.889893, 2085.51001, 2107.389893, 2063.110107, 2103.840088, 1972.180054, 1920.030029, 2079.360107, 2080.409912, 2043.939941, 1940.2399899999998, 1932.22998, 2059.73999, 2065.300049, 2096.949951, 2098.860107, 2173.600098, 2170.949951, 2168.27002, 2126.149902, 2198.810059, 2238.830078, 2278.8701170000004, 2363.639893, 2362.719971, 2384.199951, 2411.800049, 2423.409912, 2470.300049, 2471.649902, 2519.360107, 2575.26001, 2584.840088, 2673.610107, 2823.810059, 2713.830078, 2640.8701170000004, 2648.050049, 2705.27002, 2718.3701170000004, 2816.290039, 2901.52002, 2913.97998]
Your time series is pretty clearly not stationary, so the Yule-Walker assumptions are violated.
More generally, PACF is usually appropriate with stationary time series. You might difference your data first, before considering the partial autocorrelations.
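As a rough sketch (assuming data[key][array] is the pandas Series from the question, and that a 40-lag window is acceptable), the suggestion amounts to something like:
from statsmodels.tsa.stattools import pacf
diffed = data[key][array].diff(1).dropna()  # difference the level series first
vals, conf_int = pacf(diffed, nlags=40, alpha=0.05, method="ywmle")  # far fewer lags than the 177 observations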

Is there a list of Google Docs Equation Editor symbol names such as "\alpha"?

In Google Docs, you can insert an equation and edit it with the equation editor. You can add symbols like summations, integrals, and greek letters with the equation editor, but it is also possible to add them by typing "\sum" "\hat", "\alpha", etc into the equation.
Does Google provide a list of all of these keywords somewhere? It doesn't follow LaTeX, nor the names of the symbols under Insert > "Insert special characters". A lot of keywords that you would expect to work, like "\integral" or "\capitaldelta", do not.
It's a useful feature, but I can't find anything about it.
I found an interesting list at http://www.notuom.com/google-docs-equation-shortcuts.html
For later reference, an archived copy is also available at https://web.archive.org/web/20180625063351/http://www.notuom.com/google-docs-equation-shortcuts.html
So there are at least all these tags available:
letters: \alpha \beta \gamma \delta \epsilon \varepsilon \zeta \eta \theta \vartheta \iota \kappa \lambda \mu \nu \xi \pi \varpi \rho \varrho \sigma \varsigma \tau \upsilon \phi \varphi \chi \psi \omega \Gamma \Delta \Theta \Lambda \Xi \Pi \Sigma \Upsilon \Phi \Psi \Omega
ops: \times \div \cdot \pm \mp \ast \star \circ \bullet \oplus \ominus \oslash \otimes \odot \dagger \ddagger \vee \wedge \cap \cup \aleph \Re \Im \top \bot \infty \partial \forall \exists \neg \triangle \diamond
relations: \leq \geq \prec \succ \preceq \succeq \ll \gg \equiv \sim \simeq \asymp \approx \ne \subset \supset \subseteq \supseteq \in \ni \notin
maths: \frac \sqrt \rootof \subsuperscript \subscript (or _) \superscript (or ^) \overline \widehat \bigcapab \bigcupab \prodab \coprodab \rbracelr \sbracelr \bracelr \abs \intab \ointab \sumab \limab
arrows: \leftarrow \rightarrow \leftrightarrow \Leftarrow \Rightarrow \Leftrightarrow \uparrow \downarrow \updownarrow \Uparrow \Downarrow \Updownarrow
binomial: \choose
You can get all LaTeX commands with Auto-LaTeX Equations in Google Docs/Slides, and as a plus they look way better too. \integral would be \int for instance, and there are more choices here.

Sort list python3

I would like to order this list.
From:
01104D-BB'42
01104D-BB42
01104D-BB43
01104D-CC'42
01104D-CC'72
01104D-CC32
01104D-CC42
01104D-CC62
01104D-CC72
01104D-DD'74
01104D-DD'75
01104D-DD'76
01104D-DD'77
01104D-DD'78
01104D-DD75
01104D-DD76
01104D-DD77
01104D-DD78
01104D-EE'102
01104D-EE'12
01104D-EE'2
01104D-EE'32
01104D-EE'42
01104D-EE'52
01104D-EE'53
01104D-EE'72
01104D-EE'82
01104D-EE'92
01104D-EE102
01104D-EE12
01104D-EE2
01104D-EE3
01104D-EE32
01104D-EE42
01104D-EE52
01104D-EE62
01104D-EE72
01104D-EE82
01104D-EE83
01104D-EE92
01104D-EE93
To:
01104D-BB42
01104D-BB43
01104D-BB'42
01104D-CC32
01104D-CC42
01104D-CC62
01104D-CC72
01104D-CC'42
01104D-CC'72
01104D-DD75
01104D-DD76
01104D-DD77
01104D-DD78
01104D-DD'74
01104D-DD'75
01104D-DD'76
01104D-DD'77
01104D-DD'78
01104D-EE102
01104D-EE12
01104D-EE2
01104D-EE3
01104D-EE32
01104D-EE42
01104D-EE52
01104D-EE62
01104D-EE72
01104D-EE82
01104D-EE83
01104D-EE92
01104D-EE93
01104D-EE'102
01104D-EE'12
01104D-EE'2
01104D-EE'32
01104D-EE'42
01104D-EE'52
01104D-EE'53
01104D-EE'72
01104D-EE'82
01104D-EE'92
Can you help me?
thanks
I'm guessing here, because you haven't explained how you want the sort to be done. But it looks like you want the character ' to sort after the digits 0-9, and the ASCII sort order puts it before the digits. If that is correct, then you need to substitute a different character for '. A good choice might be ~, because it is the last printable ASCII character.
If your data is in mylist, then
mylist.sort(key=lambda a: a.replace("'","~"))
will sort it in the order I'm guessing you want.
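As a quick self-contained check of that key on a few of the strings from the question:
items = ["01104D-BB'42", "01104D-BB42", "01104D-BB43", "01104D-CC'42", "01104D-CC32"]
items.sort(key=lambda a: a.replace("'", "~"))  # ' now compares as ~, i.e. after the digits
print(items)
# ['01104D-BB42', '01104D-BB43', "01104D-BB'42", '01104D-CC32', "01104D-CC'42"]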

Japanese Unicode: Convert radical to regular character code

How can I convert Japanese radical characters into their "regular" kanji character counterparts?
For instance, the character for the radical fire is ⽕ (with a Unicode value of 12117)
And the regular character is 火 (with a Unicode value of 28779)
EDIT:
To clarify, the reason I think I need this is that I would like to obtain the stroke information for each radical using the kanjivg data set. However (I need to look into this further), I'm not sure whether kanjivg has stroke data for the radical characters, but it definitely has it for the regular kanji characters.
The language that I'm working with is Java - but I assumed that conversion would be similar for any language.
Using RADKFILE for this was a neat idea (@Paul), but I don't think it uses Kangxi radicals: it's encoded in EUC-JP, and unless my browser (or GitHub) silently converts between Kangxi and kanji forms, the list only contains non-Kangxi characters as far as Unicode is concerned.
The Unicode range for Kangxi radicals is on this Wikipedia page: Unicode/Character reference/2000-2FFF (bottom).
Somebody has created a mapping between them: Kanji to Kangxi Radical remapping tables. I did not check its correctness, but when you convert the code points to characters you can see whether they look the same. Here's how you do it in Java: Creating Unicode character from its number
Here is the list in CSV for convenience (kanji,radical):
0x4E00,0x2F00
0x4E28,0x2F01
0x4E36,0x2F02
0x4E3F,0x2F03
0x4E59,0x2F04
0x4E85,0x2F05
0x4E8C,0x2F06
0x4EA0,0x2F07
0x4EBA,0x2F08
0x513F,0x2F09
0x5165,0x2F0A
0x516B,0x2F0B
0x5182,0x2F0C
0x5196,0x2F0D
0x51AB,0x2F0E
0x51E0,0x2F0F
0x51F5,0x2F10
0x5200,0x2F11
0x529B,0x2F12
0x52F9,0x2F13
0x5315,0x2F14
0x531A,0x2F15
0x5338,0x2F16
0x5341,0x2F17
0x535C,0x2F18
0x5369,0x2F19
0x5382,0x2F1A
0x53B6,0x2F1B
0x53C8,0x2F1C
0x53E3,0x2F1D
0x56D7,0x2F1E
0x571F,0x2F1F
0x58EB,0x2F20
0x5902,0x2F21
0x590A,0x2F22
0x5915,0x2F23
0x5927,0x2F24
0x5973,0x2F25
0x5B50,0x2F26
0x5B80,0x2F27
0x5BF8,0x2F28
0x5C0F,0x2F29
0x5C22,0x2F2A
0x5C38,0x2F2B
0x5C6E,0x2F2C
0x5C71,0x2F2D
0x5DDB,0x2F2E
0x5DE5,0x2F2F
0x5DF1,0x2F30
0x5DFE,0x2F31
0x5E72,0x2F32
0x5E7A,0x2F33
0x5E7F,0x2F34
0x5EF4,0x2F35
0x5EFE,0x2F36
0x5F0B,0x2F37
0x5F13,0x2F38
0x5F50,0x2F39
0x5F61,0x2F3A
0x5F73,0x2F3B
0x5FC3,0x2F3C
0x6208,0x2F3D
0x6236,0x2F3E
0x624B,0x2F3F
0x652F,0x2F40
0x6534,0x2F41
0x6587,0x2F42
0x6597,0x2F43
0x65A4,0x2F44
0x65B9,0x2F45
0x65E0,0x2F46
0x65E5,0x2F47
0x66F0,0x2F48
0x6708,0x2F49
0x6728,0x2F4A
0x6B20,0x2F4B
0x6B62,0x2F4C
0x6B79,0x2F4D
0x6BB3,0x2F4E
0x6BCB,0x2F4F
0x6BD4,0x2F50
0x6BDB,0x2F51
0x6C0F,0x2F52
0x6C14,0x2F53
0x6C34,0x2F54
0x706B,0x2F55
0x722A,0x2F56
0x7236,0x2F57
0x723B,0x2F58
0x723F,0x2F59
0x7247,0x2F5A
0x7259,0x2F5B
0x725B,0x2F5C
0x72AC,0x2F5D
0x7384,0x2F5E
0x7389,0x2F5F
0x74DC,0x2F60
0x74E6,0x2F61
0x7518,0x2F62
0x751F,0x2F63
0x7528,0x2F64
0x7530,0x2F65
0x758B,0x2F66
0x7592,0x2F67
0x7676,0x2F68
0x767D,0x2F69
0x76AE,0x2F6A
0x76BF,0x2F6B
0x76EE,0x2F6C
0x77DB,0x2F6D
0x77E2,0x2F6E
0x77F3,0x2F6F
0x793A,0x2F70
0x79B8,0x2F71
0x79BE,0x2F72
0x7A74,0x2F73
0x7ACB,0x2F74
0x7AF9,0x2F75
0x7C73,0x2F76
0x7CF8,0x2F77
0x7F36,0x2F78
0x7F51,0x2F79
0x7F8A,0x2F7A
0x7FBD,0x2F7B
0x8001,0x2F7C
0x800C,0x2F7D
0x8012,0x2F7E
0x8033,0x2F7F
0x807F,0x2F80
0x8089,0x2F81
0x81E3,0x2F82
0x81EA,0x2F83
0x81F3,0x2F84
0x81FC,0x2F85
0x820C,0x2F86
0x821B,0x2F87
0x821F,0x2F88
0x826E,0x2F89
0x8272,0x2F8A
0x8278,0x2F8B
0x864D,0x2F8C
0x866B,0x2F8D
0x8840,0x2F8E
0x884C,0x2F8F
0x8863,0x2F90
0x897E,0x2F91
0x898B,0x2F92
0x89D2,0x2F93
0x8A00,0x2F94
0x8C37,0x2F95
0x8C46,0x2F96
0x8C55,0x2F97
0x8C78,0x2F98
0x8C9D,0x2F99
0x8D64,0x2F9A
0x8D70,0x2F9B
0x8DB3,0x2F9C
0x8EAB,0x2F9D
0x8ECA,0x2F9E
0x8F9B,0x2F9F
0x8FB0,0x2FA0
0x8FB5,0x2FA1
0x9091,0x2FA2
0x9149,0x2FA3
0x91C6,0x2FA4
0x91CC,0x2FA5
0x91D1,0x2FA6
0x9577,0x2FA7
0x9580,0x2FA8
0x961C,0x2FA9
0x96B6,0x2FAA
0x96B9,0x2FAB
0x96E8,0x2FAC
0x9751,0x2FAD
0x975E,0x2FAE
0x9762,0x2FAF
0x9769,0x2FB0
0x97CB,0x2FB1
0x97ED,0x2FB2
0x97F3,0x2FB3
0x9801,0x2FB4
0x98A8,0x2FB5
0x98DB,0x2FB6
0x98DF,0x2FB7
0x9996,0x2FB8
0x9999,0x2FB9
0x99AC,0x2FBA
0x9AA8,0x2FBB
0x9AD8,0x2FBC
0x9ADF,0x2FBD
0x9B25,0x2FBE
0x9B2F,0x2FBF
0x9B32,0x2FC0
0x9B3C,0x2FC1
0x9B5A,0x2FC2
0x9CE5,0x2FC3
0x9E75,0x2FC4
0x9E7F,0x2FC5
0x9EA5,0x2FC6
0x9EBB,0x2FC7
0x9EC3,0x2FC8
0x9ECD,0x2FC9
0x9ED1,0x2FCA
0x9EF9,0x2FCB
0x9EFD,0x2FCC
0x9F0E,0x2FCD
0x9F13,0x2FCE
0x9F20,0x2FCF
0x9F3B,0x2FD0
0x9F4A,0x2FD1
0x9F52,0x2FD2
0x9F8D,0x2FD3
0x9F9C,0x2FD4
0x9FA0,0x2FD5
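If you save the list above to a file (the file name here is just an assumption), a few lines of Python turn it into a radical-to-kanji lookup; the same idea works in Java with Character.toChars:
import csv
radical_to_kanji = {}
with open("kanji_radical.csv", newline="", encoding="utf-8") as f:
    for row in csv.reader(f):
        if len(row) == 2:  # skip any blank lines
            kanji_hex, radical_hex = row
            radical_to_kanji[chr(int(radical_hex, 16))] = chr(int(kanji_hex, 16))
print(radical_to_kanji["\u2F55"])  # -> 火 (U+706B), the regular form of the fire radical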
It's not entirely clear why you want this, but one possible way to do it is with Jim Breen's radkfile, which maps radicals to their associated kanji and the reverse. Combine that with some heuristics and Breen's kanjidic file (to the extent that these resources are reliable), and you can pretty easily generate a mapping. Here's an example in Python, using the cjktools library, which has Python wrappers for these things.
from cjktools.resources.radkdict import RadkDict
from cjktools.resources.kanjidic import Kanjidic
def make_rad_to_kanji_dict():
    rdict = RadkDict()
    kdict = Kanjidic()
    # Get all the radicals where there are kanji made up entirely of the one
    # radical - the ones we want are a subset of those
    tmp = ((rads[0], kanji) for kanji, rads in rdict.items()
           if len(rads) == 1)
    # All the ones with the same number of strokes - should be all the ones that
    # are homographs
    out = {rad: kanji for rad, kanji in tmp
           if (kanji in kdict and
               kdict[kanji].stroke_count == rdict.radical_to_stroke_count[rad])}
    return out
RAD_TO_KANJI_DICT = make_rad_to_kanji_dict()
if __name__ == "__main__":
    print(RAD_TO_KANJI_DICT['⽕'])
You can iterate through the file it generates and output a static mapping pretty easily. There may be existing homograph lists for that sort of thing, but I don't know of any. radkdict only has 128 kanji consisting of exactly 1 radical, so it is also a simple matter to just enumerate all of those and manually check which ones match your criteria.
Note: I looked through the list of things that are caught by the "consisting of exactly one radical" heuristic but skipped by the "has the same stroke count" filter; it seems that '老' (radical) -> '老' (kanji) and '刈' (radical) -> '刈' (kanji) are the only ones that, for whatever reason, don't get caught. Here is a CSV generated with this method.
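If you want a static file out of that, a minimal sketch (the output file name is hypothetical) that dumps RAD_TO_KANJI_DICT in the same kanji,radical hex format as the CSV above:
import csv
with open("rad_to_kanji.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f)
    for radical, kanji in sorted(RAD_TO_KANJI_DICT.items()):
        writer.writerow([f"0x{ord(kanji):04X}", f"0x{ord(radical):04X}"])  # kanji first, radical second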
