How to use arbitrary selector in interchange in J lang?

Let's assume we have a vector and matrix like below:
r =: 100 + 5 5 $ i.25
r
100 101 102 103 104
105 106 107 108 109
110 111 112 113 114
115 116 117 118 119
120 121 122 123 124
v =: 100 + 5 $ i.5
v
100 101 102 103 104
Now I would like a way to interchange fragments specified by selectors.
I know how to exchange items:
(<0 _1) &C. v
104 101 102 103 100
Here I interchanged the elements at index 0 and index _1.
In the case of a matrix, the rows (items) are interchanged:
(<0 _1) &C. r
120 121 122 123 124
105 106 107 108 109
110 111 112 113 114
115 116 117 118 119
100 101 102 103 104
But what if I want to specify two arbitrary selections? An example of what I am after:
sel1 =: (< (<0 1))
sel1 { v
100 101
sel2 =: (< (<2 3))
sel2 { v
102 103
sel1 sel2 INTERCHANGE v
102 103 100 101 104
And the same for matrix:
sel1 =: (< (<0 1),(<0 1))
sel1 { r
100 101
105 106
sel2 =: (< (<3 4),(<1 2))
sel2 { r
116 117
121 122
sel1 sel2 INTERCHANGE r
116 117 102 103 104
121 122 107 108 109
110 111 112 113 114
115 100 101 118 119
120 105 106 123 124
So my question is: how can I elegantly define an interchange that uses two selections?

I would first create the two selections and then use Amend to swap them. This may not be the most elegant or generalizable approach, but if you know the selections that you want to exchange and they have the same shape, it does work.
r
100 101 102 103 104
105 106 107 108 109
110 111 112 113 114
115 116 117 118 119
120 121 122 123 124
[rep=:((<3 4;1 2),(<0 1;0 1)) { r NB. rep is the selected replacement values
116 117
121 122
100 101
105 106
((<0 1;0 1),(<3 4;1 2)){ r NB. values that will be replaced (just a check that they are the same shape)
100 101
105 106
116 117
121 122
rep ((<0 1;0 1),(<3 4;1 2))} r NB. Select verb ({) changed to Amend adverb (})
116 117 102 103 104
121 122 107 108 109
110 111 112 113 114
115 100 101 118 119
120 105 106 123 124
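As an aside not in the original answer, the same block swap can be sketched in Python with NumPy fancy indexing, where np.ix_ plays the role of the boxed index lists:

```python
import numpy as np

# Same 5x5 matrix as r in the J example.
r = 100 + np.arange(25).reshape(5, 5)

# Two same-shaped rectangular selections: (rows, columns).
sel1 = np.ix_([0, 1], [0, 1])  # rows 0 1, columns 0 1
sel2 = np.ix_([3, 4], [1, 2])  # rows 3 4, columns 1 2

# Swap the two blocks in one statement; the right-hand side is
# evaluated before either assignment, so this mirrors what
# Amend (}) does in a single pass.
r[sel1], r[sel2] = r[sel2].copy(), r[sel1].copy()

print(r)
```

The printed matrix matches the Amend result above; the only requirement is that the two selections have the same shape and do not overlap.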

How to define selection using index function in J

Let's assume I have the following tensor t:
]m=: 100 + 4 4 $ i.16
100 101 102 103
104 105 106 107
108 109 110 111
112 113 114 115
]t=: (m ,: m+100) , m+200
100 101 102 103
104 105 106 107
108 109 110 111
112 113 114 115
200 201 202 203
204 205 206 207
208 209 210 211
212 213 214 215
300 301 302 303
304 305 306 307
308 309 310 311
312 313 314 315
I would like to select the diagonal plane of it:
100 105 110 115
200 205 210 215
300 305 310 315
How can I define a function that acts on indices (here, selecting ix(row) = ix(column) for every plane)? Also, how can I define functions working on values and indices together? I would be interested in having something like this:
(f t) { t
Thanks!
Transpose x|:y with boxed arguments runs the axes together to produce a single axis. You can use this to produce a rather idiomatic solution:
(< 0 1) |: m
100 105 110 115
(<0 1) |:"2 t
100 105 110 115
200 205 210 215
300 305 310 315
where the rank conjunction (") applies the diagonal selection to each rank-2 cell.
You can convert an array of values to its corresponding array of indices with (#:i.)#$ m.
To get an example f "working on values and indices together" you can then plug it in as a dyad that takes values on the left and indices on the right:
f=.(2|[) +. ([:=/"1]) NB. odd value or diagonal index
]r=.([ f (#:i.)#$) m NB. values f indices
1 1 0 1
0 1 0 1
0 1 1 1
0 1 0 1
r #&, m NB. flatten lists & get values where bit is set
100 101 103 105 107 109 110 111 113 115
Everything wrapped into an adverb that can be applied to f:
sel=.1 : '#~&, [ u (#:i.)#$'
f sel m
100 101 103 105 107 109 110 111 113 115
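As a cross-check outside J (my addition, assuming NumPy is available), the same diagonal-plane selection is the diagonal taken over the last two axes:

```python
import numpy as np

m = 100 + np.arange(16).reshape(4, 4)
t = np.stack([m, m + 100, m + 200])   # shape (3, 4, 4), like (m ,: m+100) , m+200

# Diagonal of each 4x4 plane, analogous to (<0 1) |:"2 t in J.
diag = t.diagonal(axis1=1, axis2=2)
print(diag)
```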

Gnuplot fit error - singular matrix in Givens()

So I want to fit a function with a dataset using gnuplot. In the file "cn20x2012", at the lines [1:300] I have this data:
1 -7.576723949519277e-06
2 4.738414366971162e-05
3 2.5908117324519247e-05
4 7.233786749999952e-06
5 4.94720225240387e-06
6 -1.857620375000113e-06
7 5.697280584855734e-06
8 -1.867760712716345e-05
9 6.64096591257211e-05
10 2.756199717307687e-05
11 4.7755705550480866e-05
12 6.590865376225963e-05
13 4.1522206877403805e-05
14 3.145294946394234e-05
15 5.9346948090625035e-05
16 5.405458204471163e-05
17 0.0001484469089218749
18 0.00011236895265264405
19 0.00010798644697620197
20 8.656723035552881e-05
21 0.00019917737876442313
22 0.00022625750686778835
23 0.00023183354141658626
24 0.0003373178915148073
25 0.00032313619574999994
26 0.0003451188893915866
27 0.0003303809005983172
28 0.0003534148565745192
29 0.00039690566743750015
30 0.0004182810016802884
31 0.00045198626877403865
32 0.00047311462195192373
33 0.0004962054400408655
34 0.0004969566757524037
35 0.0005561838221274039
36 0.0005353567324539659
37 0.00052834133201923
38 0.0005980226227637016
39 0.0005446277144831731
40 0.0005960780049278846
41 0.0006076488594567314
42 0.000710219997610289
43 0.0006714079307259616
44 0.0006990041531870184
45 0.000694646402266827
46 0.0006910307645889419
47 0.0007918124250492787
48 0.0007699669760728367
49 0.0007850042712259613
50 0.0007735240355776444
51 0.0008333605652980768
52 0.0007914544977620185
53 0.0008254284036610573
54 0.0008578590784536057
55 0.0008597165395913466
56 0.0009350752655120189
57 0.0009355867078822116
58 0.0009413161534519229
59 0.001003045837043269
60 0.0009530084342740383
61 0.000981287851927885
62 0.000986143934318509
63 0.00096895140692548
64 0.0010671633388319713
65 0.0010884129846995196
66 0.0010974424039567304
67 0.0011198829067163459
68 0.0010649422789374995
69 0.0010909547135769227
70 0.0010858300892451934
71 0.00114890178018774
72 0.0011503018930817308
73 0.0012209814370937495
74 0.001264080502711538
75 0.0012453762294132222
76 0.0012725116258625
77 0.0012649334953990384
78 0.0012195748153341352
79 0.0013151443892213466
80 0.0013003322635283651
81 0.0013099768888799042
82 0.0013227992394807694
83 0.0013325137669168274
84 0.001356943212587259
85 0.0014541924819278852
86 0.0014094004314177883
87 0.0014273633669975969
88 0.0014393176087403859
89 0.0014372794673365393
90 0.0015051545220959143
91 0.0015432813234807683
92 0.0015832276965293275
93 0.001540622433288461
94 0.0016007491118125
95 0.0016195978358533654
96 0.0016447077023067317
97 0.0016350138695504803
98 0.0017352804136629807
99 0.001731106189370192
100 0.0017407015898704323
101 0.0017367582300937506
102 0.0018164239404875008
103 0.0017829769448653838
104 0.0018303930988165871
105 0.0017893320000211548
106 0.0018727349292259614
107 0.0018745909637668267
108 0.0018425366172147846
109 0.0019053739892581727
110 0.0018849885474855762
111 0.0018689524590103368
112 0.0019431807910961535
113 0.001951890517350962
114 0.0019308973497776446
115 0.0019990349471177894
116 0.002009245176572116
117 0.0020004240575882213
118 0.002020795320423557
119 0.0020148423748725963
120 0.002070277553975961
121 0.002112121992170673
122 0.002081609846093749
123 0.0020899822853341346
124 0.002214996736841347
125 0.002210968677028846
126 0.002204230691923077
127 0.0022059340675168264
128 0.002244672249610577
129 0.002243725570633895
130 0.002198417606970913
131 0.002326686848007212
132 0.002298981945014423
133 0.002412905193465384
134 0.0023317473012668287
135 0.0023255737818221145
136 0.0024042900543605767
137 0.0023814333208341345
138 0.002414946342495192
139 0.002451134140336538
140 0.002435468088014424
141 0.002541540709086779
142 0.0024759180712812523
143 0.002562872725209133
144 0.002554363054353367
145 0.002525350243064904
146 0.0026228594448966342
147 0.002640361090600963
148 0.0026968734518557683
149 0.002687729582449518
150 0.0026799173813848555
151 0.002751626483175481
152 0.0026916526068317286
153 0.002682602742860577
154 0.0027658840884567304
155 0.0028385319315024035
156 0.002733288245524039
157 0.002805041072350961
158 0.002798724552451201
159 0.00284738398885577
160 0.002833892571264423
161 0.0028506943730673084
162 0.0028578405825413463
163 0.0028141271324870197
164 0.0029047532288887
165 0.002916689246838943
166 0.003006111659274039
167 0.0030388357088942325
168 0.0030117903270181707
169 0.003023639132084136
170 0.0030182642660336535
171 0.0029788478969250015
172 0.003086049268993511
173 0.0030530940010240377
174 0.00309287048297596
175 0.0030892688902187473
176 0.0032070964353437493
177 0.0031308958387163454
178 0.003262165689711538
179 0.0032348496648947093
180 0.003334092027257212
181 0.0032702121678230764
182 0.0032887867663149036
183 0.00333782536743269
184 0.0033132179587812513
185 0.003400563164048078
186 0.003322215536028365
187 0.0033691419445264436
188 0.00340692471343654
189 0.003370118822997599
190 0.003414042435545674
191 0.003460621729710913
192 0.003487680921019232
193 0.0034814484875360595
194 0.003528280852358173
195 0.0035260558732403864
196 0.0035947047098653846
197 0.003583761358336538
198 0.003589446784643749
199 0.0035488957604610572
200 0.0036106514596322115
201 0.003633161542855769
202 0.003596668943564904
203 0.003621647520017789
204 0.0037260161142259616
205 0.0036873544761057684
206 0.003693311409786057
207 0.0037485618958747594
208 0.0037277801700697126
209 0.003731768419286058
210 0.0037200943660144225
211 0.0037368698886754786
212 0.0038266932486634626
213 0.003786905602120193
214 0.0038484308669038464
215 0.003837662506102065
216 0.003877989966946875
217 0.0038711451977908673
218 0.0039796825709810125
219 0.003955763375971154
220 0.003983664920576924
221 0.004019112007471154
222 0.003996646585913461
223 0.004061509550884613
224 0.004015245551199519
225 0.004009779120920672
226 0.004148229009661058
227 0.0040645974335312505
228 0.0041522345293678545
229 0.004216267765944711
230 0.004191517977733654
231 0.004280319721466346
232 0.004210795761447114
233 0.004258393462563462
234 0.004267925011272355
235 0.00427713419340625
236 0.004323331966394231
237 0.004361159201735935
238 0.004351708975694715
239 0.004359997178644953
240 0.00437384325853894
241 0.004375188742463941
242 0.004424559629495192
243 0.004461955226487498
244 0.004489655863850963
245 0.0045503420149230756
246 0.0045185560829999975
247 0.004506067166336778
248 0.004585396025798076
249 0.004530840472406252
250 0.0045934151490120215
251 0.004602146584228363
252 0.004643262102497593
253 0.004707265035608172
254 0.004766505116052884
255 0.004744165929896635
256 0.0047756718030625015
257 0.004802170611427885
258 0.004896239463478368
259 0.0048845448341901425
260 0.004845213594302884
261 0.004915008781204327
262 0.004838528640802884
263 0.0048121374747617796
264 0.004895357859576925
265 0.0048793476575266816
266 0.004958465852682693
267 0.005007965180538941
268 0.0049839032653341345
269 0.005068383734646637
270 0.00498556504900495
271 0.005014623260019232
272 0.005066327855785335
273 0.0050290740743365375
274 0.005152934708140861
275 0.005174238921781968
276 0.005123581464772355
277 0.005155969777822114
278 0.005169396608004327
279 0.00516497090489663
280 0.005145110646115385
281 0.005209611399110575
282 0.005163211771749997
283 0.005181044847507209
284 0.005281641245183894
285 0.005323840847189907
286 0.005230924322329326
287 0.005256136984014422
288 0.005374876757439424
289 0.0053137727444009615
290 0.005468482116127402
291 0.005453857539401205
292 0.005417081656274039
293 0.005393994523838937
294 0.005506909240446873
295 0.005449365350307692
296 0.005551215606367787
297 0.005505932791992786
298 0.0055918512302572145
299 0.005663100163579326
300 0.0056382443690432705
When I do
f(x) = a/b*(1-exp(-b*x))
fit [1:300] f(x) "cn20x2012" using 1:2 via a,b
The curve fits perfectly. But when I try to fit the curve with
a/b*(1-exp(-b*x/3e-26))
I get the error message. Note that I've only added a constant inside the exponential part of the function.
What can I do to fit the function with the constant 3e-26?
I'm using gnuplot 5.2 patchlevel 8 on linux
Adding that constant makes the value of exp(-b*x/3e-26) so close to zero that the term (1-exp(-b*x/3e-26)) differs from 1 by less than the precision available for IEEE double-precision floating-point numbers. So you are essentially fitting the function g(x) = a/b, which is a very poor fit to your data.
Since you already have a good fit using your original function f(x), perhaps you can explain your goal in changing the function to something else. What question are you trying to answer?
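A quick numerical illustration of the underflow (b = 0.01 is an arbitrary magnitude chosen for the demo, not the fitted value):

```python
import math

b = 0.01    # illustrative magnitude only; the fitted b will differ
x = 150.0

plain = 1 - math.exp(-b * x)    # original model term: varies with x
arg = -b * x / 3e-26            # about -5e25: far beyond double range
scaled = 1 - math.exp(arg)      # exp() underflows to 0.0, so this is exactly 1.0

print(plain)
print(scaled)
```

Since `scaled` is identically 1 for every x in the data range, the parameter b has no effect on the model, which is consistent with the singular-matrix error gnuplot reports.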

pandas read_csv not reading entire file

I have a really strange problem and don't know how to solve it.
I am using Ubuntu 18.04.2 together with Python 3.7.3 64-bit and use VScode as an editor.
I am reading data from a database and writing it to a csv file with csv.writer:
import pandas as pd
import csv

with open(raw_path + station + ".csv", "w+") as f:
    file = csv.writer(f)
    # Write header into csv
    colnames = [par for par in param]
    file.writerow(colnames)
    # Write data into csv
    for row in data:
        file.writerow(row)
This works perfectly fine: it produces a .csv file with all the data read from the database up to the current timestep. However, in a later step I have to read this data into a pandas dataframe and merge it with another pandas dataframe. I read the files like this:
data1 = pd.read_csv(raw_path + file1, sep=',')
data2 = pd.read_csv(raw_path + file2, sep=',')
And then merge the data like this:
comb_data = pd.merge(data1, data2, on="datumsec", how="left").fillna(value=-999)
For 5 out of 6 locations where I do this, everything works perfectly fine: the combined dataset has the same length as the two separate ones. However, for one location pd.read_csv does not seem to read the csv files properly. I checked whether the problem was already in the database readout, but everything is OK there: I can open both files with Sublime and they have the same length. Yet when I read them with pandas.read_csv, one shows fewer lines. The strangest part is that the problem appears completely at random. Sometimes it works and reads the entire file, sometimes not. And it occurs at different positions in the file: sometimes it stops after approx. 20000 entries, sometimes at 45000, sometimes somewhere else entirely.
Here is an overview of my test output when I print the lengths of the files:
print(len(data1)): 57105
print(len(data2)): 57105
Both values directly after readout from the database, before writing anything to disk.
After saving the data as csv as described above and opening it in Excel or Sublime, I can confirm that the data contains 57105 rows. Everything is where it is supposed to be.
However, when I read the data back with pd.read_csv:
print(len(data1)): 48612
print(len(data2)): 57105
Both values after reading the data back from the csv files:
data1 48612
datumsec tl rf ff dd ffx
0 1538352000 46 81 75 288 89
1 1538352600 47 79 78 284 93
2 1538353200 45 82 79 282 93
3 1538353800 44 84 71 284 91
4 1538354400 43 86 77 288 96
5 1538355000 43 85 78 289 91
6 1538355600 46 80 79 286 84
7 1538356200 51 72 68 285 83
8 1538356800 52 71 68 281 73
9 1538357400 48 75 68 276 80
10 1538358000 45 78 62 271 76
11 1538358600 42 82 66 273 76
12 1538359200 43 81 70 274 78
13 1538359800 44 80 68 275 78
14 1538360400 45 78 66 279 72
15 1538361000 45 78 67 282 73
16 1538361600 43 79 63 275 71
17 1538362200 43 81 69 280 74
18 1538362800 42 80 70 281 76
19 1538363400 43 78 69 285 77
20 1538364000 43 78 71 285 77
21 1538364600 44 75 61 288 71
22 1538365200 45 73 56 290 62
23 1538365800 45 72 44 297 57
24 1538366400 44 73 51 286 57
25 1538367000 43 76 61 281 70
26 1538367600 40 79 66 284 73
27 1538368200 39 78 70 291 76
28 1538368800 38 80 71 287 81
29 1538369400 36 81 74 285 81
... ... .. ... .. ... ...
48582 1567738800 7 100 0 210 0
48583 1567739400 6 100 0 210 0
48584 1567740000 5 100 0 210 0
48585 1567740600 6 100 0 210 0
48586 1567741200 4 100 0 210 0
48587 1567741800 4 100 0 210 0
48588 1567742400 5 100 0 210 0
48589 1567743000 4 100 0 210 0
48590 1567743600 4 100 0 210 0
48591 1567744200 4 100 0 209 0
48592 1567744800 4 100 0 209 0
48593 1567745400 5 100 0 210 0
48594 1567746000 6 100 0 210 0
48595 1567746600 5 100 0 210 0
48596 1567747200 5 100 0 210 0
48597 1567747800 5 100 0 210 0
48598 1567748400 5 100 0 210 0
48599 1567749000 6 100 0 210 0
48600 1567749600 6 100 0 210 0
48601 1567750200 5 100 0 210 0
48602 1567750800 4 100 0 210 0
48603 1567751400 5 100 0 210 0
48604 1567752000 6 100 0 210 0
48605 1567752600 7 100 0 210 0
48606 1567753200 6 100 0 210 0
48607 1567753800 5 100 0 210 0
48608 1567754400 6 100 0 210 0
48609 1567755000 7 100 0 210 0
48610 1567755600 7 100 0 210 0
48611 1567756200 7 100 0 210 0
[48612 rows x 6 columns]
datumsec tl rf schnee ival6
0 1538352000 115 61 25 107
1 1538352600 115 61 25 107
2 1538353200 115 61 25 107
3 1538353800 115 61 25 107
4 1538354400 115 61 25 107
5 1538355000 115 61 25 107
6 1538355600 115 61 25 107
7 1538356200 115 61 25 107
8 1538356800 115 61 25 107
9 1538357400 115 61 25 107
10 1538358000 115 61 25 107
11 1538358600 115 61 25 107
12 1538359200 115 61 25 107
13 1538359800 115 61 25 107
14 1538360400 115 61 25 107
15 1538361000 115 61 25 107
16 1538361600 115 61 25 107
17 1538362200 115 61 25 107
18 1538362800 115 61 25 107
19 1538363400 115 61 25 107
20 1538364000 115 61 25 107
21 1538364600 115 61 25 107
22 1538365200 115 61 25 107
23 1538365800 115 61 25 107
24 1538366400 115 61 25 107
25 1538367000 115 61 25 107
26 1538367600 115 61 25 107
27 1538368200 115 61 25 107
28 1538368800 115 61 25 107
29 1538369400 115 61 25 107
... ... ... ... ... ...
57075 1572947400 -23 100 -2 -999
57076 1572948000 -23 100 -2 -999
57077 1572948600 -22 100 -2 -999
57078 1572949200 -23 100 -2 -999
57079 1572949800 -24 100 -2 -999
57080 1572950400 -23 100 -2 -999
57081 1572951000 -21 100 -1 -999
57082 1572951600 -21 100 -1 -999
57083 1572952200 -23 100 -1 -999
57084 1572952800 -23 100 -1 -999
57085 1572953400 -22 100 -1 -999
57086 1572954000 -23 100 -1 -999
57087 1572954600 -22 100 -1 -999
57088 1572955200 -24 100 0 -999
57089 1572955800 -24 100 0 -999
57090 1572956400 -25 100 0 -999
57091 1572957000 -26 100 -1 -999
57092 1572957600 -26 100 -1 -999
57093 1572958200 -27 100 -1 -999
57094 1572958800 -25 100 -1 -999
57095 1572959400 -27 100 -1 -999
57096 1572960000 -29 100 -1 -999
57097 1572960600 -28 100 -1 -999
57098 1572961200 -28 100 -1 -999
57099 1572961800 -27 100 -1 -999
57100 1572962400 -29 100 -2 -999
57101 1572963000 -29 100 -2 -999
57102 1572963600 -29 100 -2 -999
57103 1572964200 -30 100 -2 -999
57104 1572964800 -28 100 -2 -999
[57105 rows x 5 columns]
To me there is no obvious reason in the data why pandas should have trouble reading the entire file, and apparently there is none, considering that sometimes it reads the whole file and sometimes it does not.
I am really clueless about this. Do you have any idea what the problem could be and how to deal with it?
I finally solved my problem, and as expected it was not within the file itself. I am using multiple processes to run the named functions and some other things in parallel. The database readout + csv writing and the csv reading are performed in two different processes. Therefore the second process (reading the csv) did not know that the csv file was still being written, and read only what was already available in it. Because the file was opened by a different process, opening it did not raise an exception.
I thought I had already taken care of this, but obviously not thoroughly enough to exclude every possible case.
I had exactly the same problem in a different application and also did not understand what was wrong, because sometimes it worked and sometimes it didn't.
In a for loop, I was extracting the last two rows of a dataframe that I was creating in the same file. Sometimes the extracted rows were not the last two at all, but most of the time it worked fine. I guess the program started extracting the last two rows before the writing process was done.
I paused the script for half a second to make sure the writing process is done:
import time
time.sleep(0.5)
However, I don't think this is a very elegant solution, since it might not be sufficient if somebody runs the script on a slower computer, for instance.
Vroni, how did you solve this in the end? Is there a way to specify that a particular task must not run in parallel with other tasks? I did not define anything about parallel processing in my program, so if this is the cause, it must be happening automatically.

Generating all the combinations of 7 columns in a dataframe and adding the corresponding rows to generate new columns

I have a dataframe that looks like this:
Wave A B C
340 77 70 15
341 80 73 15
342 83 76 16
343 86 78 17
I want to generate columns that will have all the possible combinations of the existing columns. I showed 3 cols here but in my actual data, I have 7 columns and therefore 127 total combinations. The desired output is as follows:
Wave A B C AB AC BC ... ABC
340 77 70 15 147 92 ...
341 80 73 15 153 95 ...
342 83 76 16 159 99 ...
I implemented a quite inefficient version where the user inputs the combinations (AB, AC, etc.) and a new column is created with the sum of the rows. This seems almost impossible to do for 127 combinations, especially with descriptive column names.
Create a list of all combinations with chain + combinations from itertools, then sum the appropriate columns:
from itertools import combinations, chain
cols = [*df.iloc[:,1:]]
l = list(chain.from_iterable(combinations(cols, n+2) for n in range(len(cols))))
#[('A', 'B'), ('A', 'C'), ('B', 'C'), ('A', 'B', 'C')]
for items in l:
    # list(items) so .loc sees a column list, not a tuple key
    df[''.join(items)] = df.loc[:, list(items)].sum(1)
Wave A B C AB AC BC ABC
0 340 77 70 15 147 92 85 162
1 341 80 73 15 153 95 88 168
2 342 83 76 16 159 99 92 175
3 343 86 78 17 164 103 95 181
First get all the combinations, then build a mapping (a dict or Series) from each original column to the combined column names:
l = df.columns[1:].tolist()
l1 = [list(map(list, itertools.combinations(l, i))) for i in range(len(l) + 1)]
d = [dict.fromkeys(y, ''.join(y)) for x in l1 for y in x]
maps = pd.Series(d).apply(pd.Series).stack()
df.set_index('Wave', inplace=True)
# Reindex so the column order of the new df matches the map's keys.
df = df.reindex(columns=maps.index.get_level_values(1))
# The order now matches, so the combined names can be assigned directly.
df.columns = maps.tolist()
df.sum(level=0, axis=1)
Out[303]:
A B C AB AC BC ABC
Wave
340 77 70 15 147 92 85 162
341 80 73 15 153 95 88 168
342 83 76 16 159 99 92 175
343 86 78 17 164 103 95 181

python multiplication tables with while loops with different starting int

I need help with some homework. They want me to make a 10x10 multiplication table using multiple while loops and nesting. The user should be prompted for the starting number for the row and for the column. So if you give 3 for the column and 12 for the row, it would look like this:
3 4 5 6 7 8 9 10 11 12
--------------------------------------------------
12| 36 48 60 72 84 96 108 120 132 144
13| 39 52 65 78 91 104 117 130 143 156
14| 42 56 70 84 98 112 126 140 154 168
15| 45 60 75 90 105 120 135 150 165 180
16| 48 64 80 96 112 128 144 160 176 192
17| 51 68 85 102 119 136 153 170 187 204
18| 54 72 90 108 126 144 162 180 198 216
19| 57 76 95 114 133 152 171 190 209 228
20| 60 80 100 120 140 160 180 200 220 240
21| 63 84 105 126 147 168 189 210 231 252
This is what I found with the help of an internet search:
row = int(input("Enter the first row number: " ))
while(row <= 10):
    column = int(input("Enter the first column number: "))
    while(column <= 10):
        if(row+column==0):
            print('{:4s}'.format(''),end = '') #corner
        elif(row*column==0):
            print('{:4d}'.format(row+column),end = '') # border
        else:
            print('{:4d}'.format(row*column),end = '') # table
        column=column+1
    print()
    row=row+1
If anyone could help me I would be very thankful.
It should look something more like this:
row1 = int(input("Enter the first row number: " ))
column1 = int(input("Enter the first column number: "))
# TODO: print header
for row in range(row1, row1 + 10):
    for column in range(column1, column1 + 10):
        # TODO
That is, you only prompt for input twice, not (1+N) times, and you use the built-in function range() to generate the lists of rows and columns to iterate over.
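One way to fill in the TODOs, as a sketch rather than the official solution (the 5-character column width and the helper name table_lines are my choices; returning the lines makes the formatting easy to test and tweak):

```python
def table_lines(row1, column1, size=10):
    """Build the multiplication table as a list of strings."""
    header = "    " + "".join("{:5d}".format(c) for c in range(column1, column1 + size))
    separator = "-" * len(header)
    body = [
        "{:3d}|".format(row)
        + "".join("{:5d}".format(row * c) for c in range(column1, column1 + size))
        for row in range(row1, row1 + size)
    ]
    return [header, separator] + body

for line in table_lines(12, 3):
    print(line)
```

With row1=12 and column1=3 this prints the same table as in the question, give or take column widths.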
