What type of SVG is this?

I'm attempting to reverse engineer an SVG animation in JavaScript to better understand it, and I'm seeing the following code representing an "Up" motion. However, it doesn't look like any typical SVG code I'm used to working with. Can you help identify how this SVG is structured, or how I can adjust the following code so I can open it in image editing software?
d 601 9aAaAaAnBkNnUaNaN"/D 18 10bAaAnAnBuXaN"/F 22 10W7AaAaBaEaGiAW-6NiNnXaNbUaNaNaN"/D 30 10bAaEuUnU"/D 114 10bAaAnAnBuXaN"/F 117 10W7AaAaBaBaAaGkAn0NkNnKaNaNaUaNaU"/D 125 10eAaGnAnUnUnU"/D 66 12eBnAnAnNnUaN"/F 70 12W6AaAaAbEaGkAn2NuKaNaUaNaNaNaN"/D 76 12gEuNnNnN"/D 593 12eBnAnAuKaN"/F 596 12eAaAeUbAnEbKeAnAbJnAiAxNxAkAnUaXnNbNaNaU"/D 604 13bEuK"/D 166 14eEnAkKaN"/D 608 14aAnN"/F 169 15eAeAaAaBaAaAnAnBn0NkNnNnKbNgNaU"/D 222 15aAbBaGxKnKaN"/D 175 16gEnAnUnNnN"/D 308 16aAaAaAaBuAnAnEaAaAaEnAuNnNuNnUnNaUbw-7bN"/D 314 16gAaGkNuX"/D 268 17eAaAaAaAaAaGnBnNuUuUaUiNaNaU"/D 501 17bAaAuAnAnKaN"/D 548 17eAnAnAnAnKaN"/F 552 17jEW-6AkNaNaNbN"/D 557 17gExK"/D 209 18bAaEeAaBnAuAeAW8NnNnKgGnBn1NkGaAuNnNnXaUnw-6aN"/F 216 18jAaEgAxEaAaAW-8NkNbNaUnNkUaAeK"/D 260 18eAaBnAuAnEaBnAnBeAaEnAkNnXnNnNnXaNbXaNaN"/D 364 18eAaBuNkNaN"/D 509 18bAaBnAaEnAeAaAaBnAaEnAxXaNuNuNW-8AnAuUaUaNbNW6AeKnNnUaN"/D 159 19bAaBaAaAeAa0UaNaNeGnAnAnAiNn3NnNnNnXaN"/F 213 19aAnN"/D 214 19bBkNaN"/D 356 19eAnAnAnAnBbNW9NeAaAbAaAaBaBnBkBxUnNaNeUaNnUuNW-9AkAnAnEaAeAaBnAkUkNnXaNaNaw-6aNaN"/D 460 19eAaBuAnNnK"/F 502 19W6BaAaEkNW-6AuAnUaUaNaNaN"/F 312 20gAgAnAnBnAaAaEaAaNaGuAkAW-7NuNnKaAbNaKnNnNnKaNaNbN"/F 358 20W8AbAaAkAW-9AuUaNaNaN"/D 403 20eBnBnNnK"/F 407 20jAbAaAaAaGnAkAW-8NnNnUaUaUaNaN"/D 412 20gEnNnNuN"/D 451 20gAnAnAnBnI"/F 455 20jBaAaEbNaNaGuAnAn1NnXaUaNaNaN"/D 551 20W6AeAaAaAaEnNuNuNW-9AuAnKaNaNeN"/F 557 20gAaBnNnNkN"/D 17 21jAW6NjAaAaBaw6nJnBnAxNnUnNaNbNaNaKnKnNnNW-8AuAnAnGaBbAaAbAnAuAuNnNnw-7nIaUaN"/D 113 21eAa0NeAaAaBaBnEuKnNuNW-7AuAnBnBnBaAaAeBnAnAxNnw-6nNnKaKaUaU"/F 263 21W8BnBbBnNuEbNaNbAaBnNuAnEaBnAW-9NaKnNkUaNaBeUnNnAnUnKaNbN"/D 320 21ew9uAnNnKnNnNaUaNaN"/F 511 21aAnN"/D 596 21gAgNjAbBaEaBnBnJnBuAuNnNnXaNbNbNbNnNnNnNnNkNW-8AuAuKaNeN"/D 65 22aAa2NeAaBaw6uUnUnNuNW-6AkAnAnAnBuw-6aUaN"/F 462 22bAuN"/D 462 23bAaAnAuK"/F 464 23aAnN"/F 512 23aEuNaU"/F 21 24W8AaAaEaEnAnAuAW-7NnNuUnXaNaNbN"/D 417 24aAaBaAnBnAiAW-9NuNnKaNaBaAaAW8NeNaX"/F 549 24W9AbAbAaBxAnBnAW-6NuNuUnKaNbN"/F 596 24W8AeAaAaAaAaAuAuAuAnExNuNuNnNnNnKnUbNbN"/F 71 25W6AbAaBaBaAnAnAnAkAiNkNnNnNnKaNaNaNeN"/F 119 25W7AbAaEaBnAuAnAkAxNkNnNnUaUaUaNbN"/D 265 25bAuN"/F 359 25W9AbBaAnBkAnAaBuAW-7NaUnNkNnKaNaNeN"/D 403 25aAnN"/D 449 25bGaAa1NaNbNaAbJnNuNW-6AW-8NnNnNnX"/D 269 26bAaAnAuK"/D 361 26bAuN"/D 365 26bAaAnAuK"/F 161 27a3AgGnBuAkAW-6NkNnNnXnNaN"/D 262 27aAaBkUaN"/D 357 27bAaBnAuUnNaN"/F 497 27aAnN"/F 211 28eAa1GnBnNnAkAxNkNnNaNnX"/F 500 28W8AbAbAnGkAW-6NuNnNnUnNbNaN"/D 592 28aEaAaAaAbEnAkNnNnw-8"/D 158 29eGaAuNuX"/D 272 29bAaAaAnAnAuNnKaN"/D 546 29aBbAbEnAuNnNnI"/D 559 29gJnAxNnUaUaN"/F 418 30aGuAkAW-8NuKW9NjN"/F 403 31aAnN"/F 460 31W6AbAuAuAxAiNuNnUW8N"/D 65 32aAaAaAeAnAnAuNuI"/D 81 32aGnBxNnUeNaNaN"/D 129 32aEnAnAxNnNeNaNbN"/D 177 32aw6uAuNnNnUeNbU"/F 275 32aAnN"/F 274 33aAnN"/D 222 34aBnAkUeN


Gnuplot fit error - singular matrix in Givens()

I want to fit a function to a dataset using gnuplot. In the file "cn20x2012", lines [1:300] contain this data:
1 -7.576723949519277e-06
2 4.738414366971162e-05
3 2.5908117324519247e-05
4 7.233786749999952e-06
5 4.94720225240387e-06
6 -1.857620375000113e-06
7 5.697280584855734e-06
8 -1.867760712716345e-05
9 6.64096591257211e-05
10 2.756199717307687e-05
11 4.7755705550480866e-05
12 6.590865376225963e-05
13 4.1522206877403805e-05
14 3.145294946394234e-05
15 5.9346948090625035e-05
16 5.405458204471163e-05
17 0.0001484469089218749
18 0.00011236895265264405
19 0.00010798644697620197
20 8.656723035552881e-05
21 0.00019917737876442313
22 0.00022625750686778835
23 0.00023183354141658626
24 0.0003373178915148073
25 0.00032313619574999994
26 0.0003451188893915866
27 0.0003303809005983172
28 0.0003534148565745192
29 0.00039690566743750015
30 0.0004182810016802884
31 0.00045198626877403865
32 0.00047311462195192373
33 0.0004962054400408655
34 0.0004969566757524037
35 0.0005561838221274039
36 0.0005353567324539659
37 0.00052834133201923
38 0.0005980226227637016
39 0.0005446277144831731
40 0.0005960780049278846
41 0.0006076488594567314
42 0.000710219997610289
43 0.0006714079307259616
44 0.0006990041531870184
45 0.000694646402266827
46 0.0006910307645889419
47 0.0007918124250492787
48 0.0007699669760728367
49 0.0007850042712259613
50 0.0007735240355776444
51 0.0008333605652980768
52 0.0007914544977620185
53 0.0008254284036610573
54 0.0008578590784536057
55 0.0008597165395913466
56 0.0009350752655120189
57 0.0009355867078822116
58 0.0009413161534519229
59 0.001003045837043269
60 0.0009530084342740383
61 0.000981287851927885
62 0.000986143934318509
63 0.00096895140692548
64 0.0010671633388319713
65 0.0010884129846995196
66 0.0010974424039567304
67 0.0011198829067163459
68 0.0010649422789374995
69 0.0010909547135769227
70 0.0010858300892451934
71 0.00114890178018774
72 0.0011503018930817308
73 0.0012209814370937495
74 0.001264080502711538
75 0.0012453762294132222
76 0.0012725116258625
77 0.0012649334953990384
78 0.0012195748153341352
79 0.0013151443892213466
80 0.0013003322635283651
81 0.0013099768888799042
82 0.0013227992394807694
83 0.0013325137669168274
84 0.001356943212587259
85 0.0014541924819278852
86 0.0014094004314177883
87 0.0014273633669975969
88 0.0014393176087403859
89 0.0014372794673365393
90 0.0015051545220959143
91 0.0015432813234807683
92 0.0015832276965293275
93 0.001540622433288461
94 0.0016007491118125
95 0.0016195978358533654
96 0.0016447077023067317
97 0.0016350138695504803
98 0.0017352804136629807
99 0.001731106189370192
100 0.0017407015898704323
101 0.0017367582300937506
102 0.0018164239404875008
103 0.0017829769448653838
104 0.0018303930988165871
105 0.0017893320000211548
106 0.0018727349292259614
107 0.0018745909637668267
108 0.0018425366172147846
109 0.0019053739892581727
110 0.0018849885474855762
111 0.0018689524590103368
112 0.0019431807910961535
113 0.001951890517350962
114 0.0019308973497776446
115 0.0019990349471177894
116 0.002009245176572116
117 0.0020004240575882213
118 0.002020795320423557
119 0.0020148423748725963
120 0.002070277553975961
121 0.002112121992170673
122 0.002081609846093749
123 0.0020899822853341346
124 0.002214996736841347
125 0.002210968677028846
126 0.002204230691923077
127 0.0022059340675168264
128 0.002244672249610577
129 0.002243725570633895
130 0.002198417606970913
131 0.002326686848007212
132 0.002298981945014423
133 0.002412905193465384
134 0.0023317473012668287
135 0.0023255737818221145
136 0.0024042900543605767
137 0.0023814333208341345
138 0.002414946342495192
139 0.002451134140336538
140 0.002435468088014424
141 0.002541540709086779
142 0.0024759180712812523
143 0.002562872725209133
144 0.002554363054353367
145 0.002525350243064904
146 0.0026228594448966342
147 0.002640361090600963
148 0.0026968734518557683
149 0.002687729582449518
150 0.0026799173813848555
151 0.002751626483175481
152 0.0026916526068317286
153 0.002682602742860577
154 0.0027658840884567304
155 0.0028385319315024035
156 0.002733288245524039
157 0.002805041072350961
158 0.002798724552451201
159 0.00284738398885577
160 0.002833892571264423
161 0.0028506943730673084
162 0.0028578405825413463
163 0.0028141271324870197
164 0.0029047532288887
165 0.002916689246838943
166 0.003006111659274039
167 0.0030388357088942325
168 0.0030117903270181707
169 0.003023639132084136
170 0.0030182642660336535
171 0.0029788478969250015
172 0.003086049268993511
173 0.0030530940010240377
174 0.00309287048297596
175 0.0030892688902187473
176 0.0032070964353437493
177 0.0031308958387163454
178 0.003262165689711538
179 0.0032348496648947093
180 0.003334092027257212
181 0.0032702121678230764
182 0.0032887867663149036
183 0.00333782536743269
184 0.0033132179587812513
185 0.003400563164048078
186 0.003322215536028365
187 0.0033691419445264436
188 0.00340692471343654
189 0.003370118822997599
190 0.003414042435545674
191 0.003460621729710913
192 0.003487680921019232
193 0.0034814484875360595
194 0.003528280852358173
195 0.0035260558732403864
196 0.0035947047098653846
197 0.003583761358336538
198 0.003589446784643749
199 0.0035488957604610572
200 0.0036106514596322115
201 0.003633161542855769
202 0.003596668943564904
203 0.003621647520017789
204 0.0037260161142259616
205 0.0036873544761057684
206 0.003693311409786057
207 0.0037485618958747594
208 0.0037277801700697126
209 0.003731768419286058
210 0.0037200943660144225
211 0.0037368698886754786
212 0.0038266932486634626
213 0.003786905602120193
214 0.0038484308669038464
215 0.003837662506102065
216 0.003877989966946875
217 0.0038711451977908673
218 0.0039796825709810125
219 0.003955763375971154
220 0.003983664920576924
221 0.004019112007471154
222 0.003996646585913461
223 0.004061509550884613
224 0.004015245551199519
225 0.004009779120920672
226 0.004148229009661058
227 0.0040645974335312505
228 0.0041522345293678545
229 0.004216267765944711
230 0.004191517977733654
231 0.004280319721466346
232 0.004210795761447114
233 0.004258393462563462
234 0.004267925011272355
235 0.00427713419340625
236 0.004323331966394231
237 0.004361159201735935
238 0.004351708975694715
239 0.004359997178644953
240 0.00437384325853894
241 0.004375188742463941
242 0.004424559629495192
243 0.004461955226487498
244 0.004489655863850963
245 0.0045503420149230756
246 0.0045185560829999975
247 0.004506067166336778
248 0.004585396025798076
249 0.004530840472406252
250 0.0045934151490120215
251 0.004602146584228363
252 0.004643262102497593
253 0.004707265035608172
254 0.004766505116052884
255 0.004744165929896635
256 0.0047756718030625015
257 0.004802170611427885
258 0.004896239463478368
259 0.0048845448341901425
260 0.004845213594302884
261 0.004915008781204327
262 0.004838528640802884
263 0.0048121374747617796
264 0.004895357859576925
265 0.0048793476575266816
266 0.004958465852682693
267 0.005007965180538941
268 0.0049839032653341345
269 0.005068383734646637
270 0.00498556504900495
271 0.005014623260019232
272 0.005066327855785335
273 0.0050290740743365375
274 0.005152934708140861
275 0.005174238921781968
276 0.005123581464772355
277 0.005155969777822114
278 0.005169396608004327
279 0.00516497090489663
280 0.005145110646115385
281 0.005209611399110575
282 0.005163211771749997
283 0.005181044847507209
284 0.005281641245183894
285 0.005323840847189907
286 0.005230924322329326
287 0.005256136984014422
288 0.005374876757439424
289 0.0053137727444009615
290 0.005468482116127402
291 0.005453857539401205
292 0.005417081656274039
293 0.005393994523838937
294 0.005506909240446873
295 0.005449365350307692
296 0.005551215606367787
297 0.005505932791992786
298 0.0055918512302572145
299 0.005663100163579326
300 0.0056382443690432705
When I do
f(x) = a/b*(1-exp(-b*x))
fit [1:300] f(x) "cn20x2012" using 1:2 via a,b
The curve fits perfectly. But when I try to fit the curve with
a/b*(1-exp(-b*x/(3e-26)))
I get the "singular matrix in Givens()" error from the title. Note that I've only added a constant inside the exponential part of the function.
What can I do to fit the function with the constant 3e-26?
I'm using gnuplot 5.2 patchlevel 8 on Linux.
Adding that constant makes the value of exp(-b*x/(3e-26)) so close to zero that the term (1-exp(-b*x/(3e-26))) differs from 1 by less than the precision available in IEEE double precision floating point. So you are essentially fitting the function g(x) = a/b, a constant, which is a very poor fit to your data. Since a and b then only enter through the ratio a/b, the two parameters cannot be determined independently, and the fit matrix becomes singular, which is exactly what the Givens() error reports.
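You can see the underflow directly; here is a minimal NumPy sketch (the value of b is just an assumed, plausible fit result):
import numpy as np
b = 1.0e-3                          # assumed plausible fitted value
x = np.arange(1, 301, dtype=float)  # the fit range [1:300]
# Without the constant, the exponential term varies with x,
# so the fit can distinguish a and b.
term = 1.0 - np.exp(-b * x)
print(term[:3])                     # [0.0009995 0.001998  0.0029955]
# With the constant, the exponent is about -3e22, exp() underflows
# to exactly 0.0, and the model collapses to the constant a/b.
term_const = 1.0 - np.exp(-b * x / 3e-26)
print(np.unique(term_const))        # [1.]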
Since you already have a good fit using your original function f(x), perhaps you can explain why you want to change the function to something else? What question are you trying to answer?

Why doesn't the seaborn plot show a confidence interval?

I am using sns.lineplot to show the confidence intervals in a plot.
sns.lineplot(x = threshold, y = mrl_array, err_style = 'band', ci=95)
plt.show()
I'm getting the following plot, which doesn't show the confidence interval:
What's the problem?
There is probably only a single observation per x value.
If there is only one observation per x value, then there is no confidence interval to plot.
Bootstrapping is performed per x value, but there needs to be more than one observation per x for this to take effect.
ci: Size of the confidence interval to draw when aggregating with an estimator. 'sd' means to draw the standard deviation of the data. Setting to None will skip bootstrapping.
Note the following examples from seaborn.lineplot.
This is also the case for sns.relplot with kind='line'.
The question specifies sns.lineplot, but this answer applies to any seaborn plot that displays a confidence interval, such as seaborn.barplot.
Data
import seaborn as sns
# load data
flights = sns.load_dataset("flights")
year month passengers
0 1949 Jan 112
1 1949 Feb 118
2 1949 Mar 132
3 1949 Apr 129
4 1949 May 121
# only May flights
may_flights = flights.query("month == 'May'")
year month passengers
4 1949 May 121
16 1950 May 125
28 1951 May 172
40 1952 May 183
52 1953 May 229
64 1954 May 234
76 1955 May 270
88 1956 May 318
100 1957 May 355
112 1958 May 363
124 1959 May 420
136 1960 May 472
# standard deviation for each year of May data
may_flights.set_index('year')[['passengers']].std(axis=1)
year
1949 NaN
1950 NaN
1951 NaN
1952 NaN
1953 NaN
1954 NaN
1955 NaN
1956 NaN
1957 NaN
1958 NaN
1959 NaN
1960 NaN
dtype: float64
# flight in wide format
flights_wide = flights.pivot(index="year", columns="month", values="passengers")
month Jan Feb Mar Apr May Jun Jul Aug Sep Oct Nov Dec
year
1949 112 118 132 129 121 135 148 148 136 119 104 118
1950 115 126 141 135 125 149 170 170 158 133 114 140
1951 145 150 178 163 172 178 199 199 184 162 146 166
1952 171 180 193 181 183 218 230 242 209 191 172 194
1953 196 196 236 235 229 243 264 272 237 211 180 201
1954 204 188 235 227 234 264 302 293 259 229 203 229
1955 242 233 267 269 270 315 364 347 312 274 237 278
1956 284 277 317 313 318 374 413 405 355 306 271 306
1957 315 301 356 348 355 422 465 467 404 347 305 336
1958 340 318 362 348 363 435 491 505 404 359 310 337
1959 360 342 406 396 420 472 548 559 463 407 362 405
1960 417 391 419 461 472 535 622 606 508 461 390 432
# standard deviation for each year
flights_wide.std(axis=1)
year
1949 13.720147
1950 19.070841
1951 18.438267
1952 22.966379
1953 28.466887
1954 34.924486
1955 42.140458
1956 47.861780
1957 57.890898
1958 64.530472
1959 69.830097
1960 77.737125
dtype: float64
Plots
may_flights has one observation per year, so no CI is shown.
sns.lineplot(data=may_flights, x="year", y="passengers")
sns.barplot(data=may_flights, x='year', y='passengers')
flights_wide shows there are twelve observations for each year, so a CI is shown when all of flights is plotted.
sns.lineplot(data=flights, x="year", y="passengers")
sns.barplot(data=flights, x='year', y='passengers')
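Note: in seaborn 0.12 and later the ci parameter has been replaced by errorbar; a hedged equivalent of the 95% band with the newer API:
import seaborn as sns
import matplotlib.pyplot as plt
flights = sns.load_dataset("flights")
# seaborn >= 0.12 spells a 95% confidence band as errorbar=("ci", 95);
# older versions use ci=95 instead.
sns.lineplot(data=flights, x="year", y="passengers", errorbar=("ci", 95))
plt.show()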

How to Put Two Sets of Data on One Graph in Excel

Date Issue Redemption App Date Issue Redemption App
21-Nov 891 200 523 28-Nov 660 179 302
22-Nov 607 125 423 29-Nov 712 165 420
23-Nov 456 165 422 30-Nov 499 128 331
24-Nov 510 115 391 1-Dec 596 170 392
25-Nov 525 120 400 2-Dec 573 169 397
26-Nov 585 158 396 3-Dec 450 120 350
27-Nov 582 88 410 4-Dec 650 150 360
Try creating your chart with the x and y axis data, then use the "add data" function in the chart menu to add the second set as another series.
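If you'd rather script the chart, here is a minimal sketch using the Python xlsxwriter package (the file name, sheet layout, and four-row sample are assumptions for illustration):
import xlsxwriter
# A small sample of the two series from the question.
dates = ["21-Nov", "22-Nov", "23-Nov", "24-Nov"]
issue = [891, 607, 456, 510]
redemption = [200, 125, 165, 115]
workbook = xlsxwriter.Workbook("two_series.xlsx")  # assumed output name
worksheet = workbook.add_worksheet()
worksheet.write_column("A1", dates)
worksheet.write_column("B1", issue)
worksheet.write_column("C1", redemption)
chart = workbook.add_chart({"type": "line"})
# Each add_series() call puts another data set on the same graph.
chart.add_series({"name": "Issue",
                  "categories": "=Sheet1!$A$1:$A$4",
                  "values": "=Sheet1!$B$1:$B$4"})
chart.add_series({"name": "Redemption",
                  "categories": "=Sheet1!$A$1:$A$4",
                  "values": "=Sheet1!$C$1:$C$4"})
worksheet.insert_chart("E2", chart)
workbook.close()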

How do I get or produce the Unicode needed in Tesseract box file?

In Tesseract's Google Code documentation at https://code.google.com/p/tesseract-ocr/wiki/TrainingTesseract3, there is an instruction that I have to get the Unicode for the generated characters in my box files. It looks like this:
s 734 494 751 519 0
p 753 486 776 518 0
r 779 494 796 518 0
i 799 494 810 527 0
n 814 494 837 518 0
g 839 485 862 518 0
t 865 492 878 521 0
u 101 453 122 484 0
b 126 453 146 486 0
e 149 452 168 477 0
r 172 453 187 476 0
d 211 451 232 484 0
e 236 451 255 475 0
n 259 452 281 475 0
Now, my question is: where or how do I get this? I am developing an OCR for the Bangla language.
The box file is a UTF-8 encoded text file. You can use a Unicode-compatible text editor, or a box file editor, to open and edit the characters with your favorite Bangla input method.
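For example, a minimal Python sketch (the box file name is an assumption) that reads a box file as UTF-8 and prints the Unicode code point of each glyph, which is useful for checking that Bangla characters were saved correctly:
# Each line of a box file is: <glyph> <x1> <y1> <x2> <y2> <page>
with open("bangla.exp0.box", encoding="utf-8") as f:  # assumed file name
    for line in f:
        glyph, *coords = line.split()
        # A glyph may be more than one code point (e.g. combining marks).
        codepoints = " ".join(f"U+{ord(ch):04X}" for ch in glyph)
        print(glyph, codepoints, coords)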

Most efficient compression extremely large data set

I'm currently generating an extremely large data set on a remote HPC (high performance computer). We are talking about 3 TB at the moment, and it could reach up to 10 TB once I'm done.
Each of the 450 000 files ranges from a few KB to about 100 MB and contains lines of integers with no repetitive/predictable patterns. Moreover, they are split among 150 folders (I use the path to classify them according to the input parameters). That could be fine, but my research group is technically limited to 1 TB of disk space on the remote server, although the admins are willing to close their eyes until the situation gets sorted out.
What would you recommend to compress such a dataset?
A limitation is that tasks can't run more than 48 hours at a time on this computer, so long but efficient compression methods are possible only if 48 hours is enough... I really have no other options, as neither I nor my group owns enough disk space on other machines.
EDIT: Just to clarify, this is a remote computer that runs some variation of Linux. All standard compression tools are available. I don't have superuser rights.
EDIT2: As requested by Sergio, here is a sample output (the first 10 lines of a file):
27 42 46 63 95 110 205 227 230 288 330 345 364 367 373 390 448 471 472 482 509 514 531 533 553 617 636 648 667 682 703 704 735 740 762 775 803 813 882 915 920 936 939 942 943 979 1018 1048 1065 1198 1219 1228 1513 1725 1888 1944 2085 2190 2480 5371 5510 5899 6788 7728 9514 10382 11946 13063 13808 16070 23301 23511 24538
93 94 106 143 157 164 168 181 196 293 299 334 369 372 439 457 508 527 547 557 568 570 573 592 601 668 701 704 799 838 848 870 875 882 890 913 953 959 1022 1024 1037 1046 1169 1201 1288 1615 1684 1771 2043 2204 2348 2387 2735 3149 4319 4890 4989 5321 5588 6453 7475 9277 9649 9654 11433 16966
1463
183 469 514 597 792
25 50 143 152 205 244 253 424 433 446 461 476 486 545 552 570 632 642 647 665 681 682 718 735 746 772 792 811 830 851 891 903 925 1037 1115 1147 1171 1612 1979 2749 3074 3158 6042 12709 20571 20859
24 30 86 312 726 875 1023 1683 1799
33 36 42 65 110 112 122 227 241 262 274 284 305 328 353 366 393 414 419 449 462 488 489 514 635 690 732 744 767 772 812 820 843 844 855 889 893 925 936 939 981 1015 1020 1060 1064 1130 1174 1304 1393 1477 1939 2004 2200 2205 2208 2216 2234 3284 4456 5209 6810 6834 8067 10811 10895 12771 15291
157 761 834 875 1001 2492
21 141 146 169 181 256 266 337 343 367 397 402 405 433 454 466 513 527 656 684 708 709 732 743 811 883 913 938 947 986 987 1013 1053 1190 1215 1288 1289 1333 1513 1524 1683 1758 2033 2684 3714 4129 6015 7395 8273 8348 9483 23630
1253
All integers are separated by one whitespace, and each line corresponds to a given element. I use implicit line numbers to store this information, because my data is associative, i.e. the 0th element is associated with elements 27 42 46 63 110.. etc. I believe that there is no extra information whatsoever.
A few points that may help:
It looks like your numbers are sorted. If this is always the case, then it will be more efficient to compress the differences between adjacent numbers rather than the numbers themselves, since the differences will be somewhat smaller on average.
There are good ways of encoding small integer values in binary format that are probably better than encoding them as text. See the technique used by Google in their protocol buffers (https://developers.google.com/protocol-buffers/docs/encoding); a sketch of both ideas follows below.
Once you have applied the above techniques, zipping / some standard form of compression should improve everything even further.
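A minimal sketch of the first two points, assuming sorted lines like those in the question (illustrative, not a tuned implementation):
def encode_varint(n):
    """Protocol-buffer style varint: 7 bits per byte, high bit = continue."""
    out = bytearray()
    while True:
        byte = n & 0x7F
        n >>= 7
        if n:
            out.append(byte | 0x80)
        else:
            out.append(byte)
            return bytes(out)
def encode_line(numbers):
    """Delta-encode a sorted line, then varint-encode each delta."""
    deltas = [numbers[0]] + [b - a for a, b in zip(numbers, numbers[1:])]
    return b"".join(encode_varint(d) for d in deltas)
line = [24, 30, 86, 312, 726, 875, 1023, 1683, 1799]
encoded = encode_line(line)
text_size = len(" ".join(map(str, line)))
print(f"{text_size} text bytes -> {len(encoded)} binary bytes")  # 35 -> 14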
There is some research done at this LINK that breaks down the pros and cons of using gzip, bzip2, and lzma. Hopefully this can let you make an informed decision on your best approach.
All your numbers seem to be increasing within each line. A rather common approach in database technology is to store only the differences between adjacent values, turning a line like
24 30 86 312 726 875 1023 1683 1799
into something like
6 56 226 414 149 148 660 116
Other lines of your example would show even more benefit, as the differences are smaller (keep the first value of each line so the line can be reconstructed). This also works when the numbers decrease in between, but then you have to be able to deal with negative differences.
The second thing to do would be changing the encoding. While compression will reduce this overhead, you're currently using 8 bits per character, whereas the digits 0-9 plus a space separator fit comfortably in 4 bits. Implementing your own "4-bit character set" would already cut your storage requirements to half the current size! In the end, this amounts to a binary encoding of numbers of arbitrary length.
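A hedged sketch of that 4-bit idea: map the eleven needed symbols (the digits plus a separator) to nibbles and pack two per byte (the symbol table and padding value are assumptions):
# Digits 0-9 plus the space separator; 0xF marks padding (an assumption).
SYMBOLS = "0123456789 "
TO_NIBBLE = {ch: i for i, ch in enumerate(SYMBOLS)}
PAD = 0xF
def pack_line(line):
    """Pack a line of space-separated digits into 4-bit symbols."""
    nibbles = [TO_NIBBLE[ch] for ch in line]
    if len(nibbles) % 2:
        nibbles.append(PAD)
    return bytes((hi << 4) | lo for hi, lo in zip(nibbles[::2], nibbles[1::2]))
def unpack_line(data):
    """Inverse of pack_line; drops the padding nibble if present."""
    nibbles = [n for b in data for n in ((b >> 4), b & 0x0F)]
    return "".join(SYMBOLS[n] for n in nibbles if n != PAD)
line = "24 30 86 312 726 875 1023 1683 1799"
packed = pack_line(line)
assert unpack_line(packed) == line
print(f"{len(line)} bytes -> {len(packed)} bytes")  # 35 -> 18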
