I'm trying to implement an export-to-EPS feature (in C++), and I'm using the pdfmark (PDF) extensions for transparency, but I have yet to see them have any effect. I'm using Illustrator 14.0 and GSview 4.9 as clients. Even this example from Adobe's documentation produces no transparency.
%!PS-Adobe-3.0 EPSF-3.0
%%Creator: xan
%%Pages: 1
%%Orientation: Portrait
%%BoundingBox: 0 0 600 600
%%EndComments
%%Page: 1 1
/DeviceCMYK setcolorspace 15 setlinewidth
[ /ca .6 /CA .3 /BM /Normal /SetTransparency pdfmark
0 1 1 0 setcolor 220 330 150 0 360 arc fill % red
0 0 1 0 setcolor 320 503 150 0 360 arc fill % yellow
1 1 0 0 setcolor 420 330 150 0 360 arc fill % blue
1 0 0 0 setcolor 230 440 104 0 360 arc stroke % cyan
0 1 0 0 setcolor 410 440 104 0 360 arc stroke % magenta
0 0 1 0 setcolor 320 284 104 0 360 arc stroke % yellow
%%EOF
Is there another flag I need to set? Or is it just a problem with these clients?
I have found that using pdfmark for transparency does work in Adobe Distiller (which converts EPS to PDF) if you replace
/AllowTransparency false
with
/AllowTransparency true
in the .joboptions settings file.
Related
I have drawn a representation of a cogwheel, and I am unable to fill the area I want filled. You can look at it here:
https://jsfiddle.net/9k451fb6/
I want the area outside the central "hole", out to the cogs, to be filled; instead, the "hole" itself gets filled, along with portions of the edges of the cogs. That in itself is curious to me: the path is a single complete path with a single close (z) at the end, so why does each cog section appear to have been closed on its own?
I have tried both fill-rule options, nonzero and evenodd, but nothing changes.
This is the code I'm using. Note that it is drawn with a single path. I have tried both this version and one that closes the path (inserting a z) just before drawing the circle in the middle (the "hole"):
<svg id="cogwheel_1" viewBox="0 0 300 300">
<path id="arc_path" stroke="#ff0000" stroke-width="2" fill="blue" fill-rule="evenodd" d="M 120 5 A 30 30 0 0 0 179 5 L 211 15 M 211 15 A 30 30 0 0 0 259 50 L 278 77 M 278 77 A 30 30 0 0 0 296 133 L 296 166 M 296 166 A 30 30 0 0 0 278 222 L 259 249 M 259 249 A 30 30 0 0 0 211 284 L 179 294 M 179 294 A 30 30 0 0 0 120 294 L 88 284 M 88 284 A 30 30 0 0 0 40 249 L 21 222 M 21 222 A 30 30 0 0 0 3 166 L 3 133 M 3 133 A 30 30 0 0 0 21 77 L 40 50 M 40 50 A 30 30 0 0 0 88 15 L 120 5 M 150 200 A 50 50 0 1 0 149 200 z"></path>
</svg>
I think this is what you want to achieve:
<svg id="cogwheel_1" viewBox="0 0 300 300">
<path id="arc_path" stroke="#ff0000" stroke-width="2" fill="blue" fill-rule="evenodd" d="M 120 5 A 30 30 0 0 0 179 5 L 211 15
A 30 30 0 0 0 259 50 L 278 77
A 30 30 0 0 0 296 133 L 296 166
A 30 30 0 0 0 278 222 L 259 249
A 30 30 0 0 0 211 284 L 179 294
A 30 30 0 0 0 120 294 L 88 284
A 30 30 0 0 0 40 249 L 21 222
A 30 30 0 0 0 3 166 L 3 133
A 30 30 0 0 0 21 77 L 40 50
A 30 30 0 0 0 88 15 L 120 5
M 150 200 A 50 50 0 1 0 149 200 z"></path>
</svg>
I've removed the M commands between the cog's teeth. By moving to a new point for every tooth, you were starting a new subpath and forcing each fragment to be filled on its own.
Even though this has already been answered, I'm going to leave my approach here. The problem with your SVG was that every tooth corner of the cogwheel started a new subpath whose nodes were at the same position as the next tooth's, but were not connected to each other. I just opened the file in Inkscape, selected all the nodes, and joined them with the corresponding tool. A note for the future: any time you see similar behaviour, mainly with SVGs exported by Illustrator, CorelDRAW and some online editors, you can be sure that somewhere in the path there are nodes that overlap but are not connected.
Notice the difference between the two corners: in the one above, the two nodes are not connected.
So what you really had was a series of subpaths filled in blue, as the arrow shows.
And this is the code as SVGO cleaned it after Inkscape saved it:
<svg id="cogwheel_1" viewBox="0 0 300 300">
<path id="arc_path" d="M 264.07142,186.6301 C 259.69918,200.23262 265.52681,215.03117 278,222 l -19,27 C 248.58911,238.28353 231.87939,236.86124 219.80706,245.66397 207.73474,254.46671 203.97989,270.81109 211,284 l -32,10 c -2.6299,-14.22385 -15.03507,-24.54564 -29.5,-24.54564 -14.46493,0 -26.8701,10.32179 -29.5,24.54564 L 88,284 C 95.020113,270.81109 91.265258,254.46671 79.192936,245.66397 67.120613,236.86124 50.410887,238.28353 40,249 L 21,222 C 33.47319,215.03117 39.300816,200.23262 34.928577,186.6301 30.556338,173.02758 17.197624,164.39607 3,166 l 0,-33 c 14.197624,1.60393 27.556338,-7.02758 31.928577,-20.6301 C 39.300816,98.767378 33.47319,83.968835 21,77 L 40,50 C 50.410887,60.716466 67.120613,62.138761 79.192936,53.336026 91.265258,44.533291 95.020113,28.188906 88,15 L 120,5 c 2.6299,14.223853 15.03507,24.545641 29.5,24.545641 14.46493,0 26.8701,-10.321788 29.5,-24.545641 l 32,10 c -7.02011,13.188906 -3.26526,29.533291 8.80706,38.336026 C 231.87939,62.138761 248.58911,60.716466 259,50 l 19,27 C 265.52681,83.968835 259.69918,98.767378 264.07142,112.3699 268.44366,125.97242 281.80238,134.60393 296,133 l 0,33 c -14.19762,-1.60393 -27.55634,7.02758 -31.92858,20.6301 z M 150,200 c 27.51544,-0.27517 49.63706,-22.73389 49.4979,-50.23579 -0.13917,-27.5005 -22.48529,-49.69493 -49.9979,-49.69493 -27.51261,0 -49.858733,22.19443 -49.997895,49.69493 C 99.362936,177.26611 121.48456,199.72483 149,200 Z" style="fill:#0000ff;fill-rule:evenodd;stroke:#ff0000;stroke-width:2"/>
</svg>
And, as a final piece of advice (as I always recommend): SVG was created for the web, and Inkscape produces native SVG files ready for the web, so why do people still struggle with the awful, overcomplicated, inhuman SVG files produced by proprietary software?
I have a bladefs volume, and I just checked /proc/self/mountstats, where I see per-operation statistics:
...
opts: rw,vers=3,rsize=131072,wsize=131072,namlen=255,acregmin=1800,acregmax=1800,acdirmin=1800,acdirmax=1800,hard,nolock,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=10.0.2.100,mountvers=3,mountport=903,mountproto=tcp,local_lock=all
age: 18129
caps: caps=0x3fc7,wtmult=512,dtsize=32768,bsize=0,namlen=255
sec: flavor=1,pseudoflavor=1
events: 18840 116049 23 5808 22138 21048 146984 13896 287 2181 0 7560 31380 0 9565 5106 0 6471 0 0 13896 0 0 0 0 0 0
bytes: 339548407 48622919 0 0 311167118 48622919 76846 13896
RPC iostats version: 1.0 p/v: 100003/3 (nfs)
xprt: tcp 875 1 7 0 0 85765 85764 1 206637 0 37 1776 35298
per-op statistics
NULL: 0 0 0 0 0 0 0 0
GETATTR: 18840 18840 0 2336164 2110080 92 8027 8817
SETATTR: 0 0 0 0 0 0 0 0
LOOKUP: 21391 21392 0 3877744 4562876 118 103403 105518
ACCESS: 20183 20188 0 2584304 2421960 72 10122 10850
READLINK: 0 0 0 0 0 0 0 0
READ: 3425 3425 0 465848 311606600 340 97323 97924
WRITE: 2422 2422 0 48975488 387520 763 200645 201522
CREATE: 2616 2616 0 447392 701088 21 870 1088
MKDIR: 858 858 0 188760 229944 8 573 705
SYMLINK: 0 0 0 0 0 0 0 0
MKNOD: 0 0 0 0 0 0 0 0
REMOVE: 47 47 0 6440 6768 0 8 76
RMDIR: 23 23 0 4876 3312 0 3 5
RENAME: 23 23 0 7176 5980 0 5 6
LINK: 0 0 0 0 0 0 0 0
READDIR: 160 160 0 23040 4987464 0 16139 16142
READDIRPLUS: 15703 15703 0 2324044 8493604 43 1041634 1041907
FSSTAT: 1 1 0 124 168 0 0 0
FSINFO: 2 2 0 248 328 0 0 0
PATHCONF: 1 1 0 124 140 0 0 0
COMMIT: 68 68 0 9248 10336 2 272 275...
about my bladefs. I am interested in the READ operation statistics. As far as I know, the last column (97924) means:
execute: How long ops of this type take to execute (from
rpc_init_task to rpc_exit_task) (microsecond)
How should I interpret this? Is it the average time of each read operation, regardless of the block size? I strongly suspect that I have problems with NFS: am I right? A value of 0.1 s looks bad to me, but I am not sure exactly how to interpret this time: an average, some kind of sum...?
After reading the kernel source: the statistics are printed by rpc_clnt_show_stats() in net/sunrpc/stats.c, and the 8th column of the per-op statistics appears to be printed by _print_rpc_iostats, which prints the struct rpc_iostats member om_execute. (The newest kernels have 9 columns, with errors in the last column.)
That member appears to be referenced/modified only in rpc_count_iostats_metrics, with:
execute = ktime_sub(now, task->tk_start);
op_metrics->om_execute = ktime_add(op_metrics->om_execute, execute);
Assuming ktime_add does what it says, the value of om_execute only ever increases. So the 8th column of mountstats is the cumulative sum of the execution times of all operations of this type, not an average.
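Since om_execute only accumulates, a rough per-operation average can be derived by dividing the 8th column by the operation count in the 1st. A minimal sketch (the parsing is illustrative; field positions are those of the per-op lines shown above):

```python
# Parse a per-op statistics line from /proc/self/mountstats.
# Field 1 is the number of operations sent; field 8 (om_execute) is the
# cumulative execute time over all of them, so the per-op average is
# execute_total / ops.
def avg_execute(per_op_line):
    name, rest = per_op_line.split(":")
    fields = [int(x) for x in rest.split()]
    ops, execute_total = fields[0], fields[7]
    return execute_total / ops if ops else 0.0

# The READ line from the question: 3425 ops, 97924 cumulative execute time.
print(avg_execute("READ: 3425 3425 0 465848 311606600 340 97323 97924"))
```

For the READ line above this gives roughly 28.6 per operation (in whatever unit the kernel reports); the 97924 figure is a running total, not the duration of a single read.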
We have deployed a global Apache Cassandra cluster (nodes: 12, RF: 3, version: 3.11.2) in our production environment. We are running into an issue where a major compaction on a column family fails to clear tombstones on one node (out of 3 replicas), even though the metadata shows the minimum timestamp is past the table's gc_grace_seconds.
Here is the sstablemetadata output:
SSTable: mc-4302-big
Partitioner: org.apache.cassandra.dht.Murmur3Partitioner
Bloom Filter FP chance: 0.010000
Minimum timestamp: 1
Maximum timestamp: 1560326019515476
SSTable min local deletion time: 1560233203
SSTable max local deletion time: 2147483647
Compressor: org.apache.cassandra.io.compress.LZ4Compressor
Compression ratio: 0.8808303792058351
TTL min: 0
TTL max: 0
First token: -9201661616334346390 (key=bca773eb-ecbb-49ec-9330-cc16da310b58:::)
Last token: 9117719078924671254 (key=7c23b975-5354-4c82-82e5-1762bac75a8d:::)
minClustringValues: [00000f8f-74a9-4ce3-9d87-0a4dabef30c1]
maxClustringValues: [ffffc966-a02c-4e1f-bdd1-256556624288]
Estimated droppable tombstones: 46.31761624099541
SSTable Level: 0
Repaired at: 0
Replay positions covered: {}
totalColumnsSet: 0
totalRows: 618382
Estimated tombstone drop times:
1560233680: 353
1560234658: 237
1560235604: 176
1560236803: 471
1560237652: 402
1560238342: 195
1560239166: 373
1560239969: 356
1560240586: 262
1560241207: 247
1560242037: 387
1560242847: 357
1560243742: 280
1560244469: 283
1560245095: 353
1560245957: 357
1560246773: 362
1560247956: 449
1560249034: 217
1560249849: 310
1560251080: 296
1560251984: 304
1560252993: 239
1560253907: 407
1560254839: 977
1560255761: 671
1560256486: 317
1560257199: 679
1560258020: 703
1560258795: 507
1560259378: 298
1560260093: 2302
1560260869: 2488
1560261535: 2818
1560262176: 2842
1560262981: 1685
1560263708: 1830
1560264308: 808
1560264941: 1990
1560265753: 1340
1560266708: 2174
1560267629: 2253
1560268400: 1627
1560269174: 2347
1560270019: 2579
1560270888: 3947
1560271690: 1727
1560272446: 2573
1560273249: 1523
1560274086: 3438
1560275149: 2737
1560275966: 3487
1560276814: 4101
1560277660: 2012
1560278617: 1198
1560279680: 769
1560280441: 1337
1560281033: 608
1560281876: 2065
1560282546: 2926
1560283128: 6305
1560283836: 824
1560284574: 71
1560285166: 140
1560285828: 118
1560286404: 83
1560295835: 72
1560296951: 456
1560297814: 670
1560298496: 271
1560299333: 473
1560300159: 284
1560300831: 127
1560301551: 536
1560302309: 425
1560303302: 860
1560304064: 465
1560304782: 319
1560305657: 323
1560306552: 236
1560307454: 368
1560308409: 320
1560309178: 210
1560310091: 177
1560310881: 85
1560311970: 147
1560312706: 76
1560313495: 88
1560314847: 687
1560315817: 1618
1560316544: 1245
1560317423: 5361
1560318491: 2060
1560319595: 5853
1560320587: 5390
1560321473: 3868
1560322644: 5784
1560323703: 6861
1560324838: 7200
1560325744: 5642
Count Row Size Cell Count
1 0 3054
2 0 0
3 0 0
4 0 0
5 0 0
6 0 0
7 0 0
8 0 0
10 0 0
12 0 0
14 0 0
17 0 0
20 0 0
24 0 0
29 0 0
35 0 0
42 0 0
50 0 0
60 98 0
72 49 0
86 46 0
103 2374 0
124 39 0
149 36 0
179 43 0
215 18 0
258 26 0
310 24 0
372 18 0
446 16 0
535 19 0
642 27 0
770 17 0
924 12 0
1109 14 0
1331 23 0
1597 20 0
1916 12 0
2299 11 0
2759 11 0
3311 11 0
3973 12 0
4768 5 0
5722 8 0
6866 5 0
8239 5 0
9887 6 0
11864 5 0
14237 10 0
17084 1 0
20501 8 0
24601 2 0
29521 2 0
35425 3 0
42510 2 0
51012 2 0
61214 1 0
73457 2 0
88148 3 0
105778 0 0
126934 3 0
152321 2 0
182785 1 0
219342 0 0
263210 0 0
315852 0 0
379022 0 0
454826 0 0
545791 0 0
654949 0 0
785939 0 0
943127 0 0
1131752 0 0
1358102 0 0
1629722 0 0
1955666 0 0
2346799 0 0
2816159 0 0
3379391 1 0
4055269 0 0
4866323 0 0
5839588 0 0
7007506 0 0
8409007 0 0
10090808 1 0
12108970 0 0
14530764 0 0
17436917 0 0
20924300 0 0
25109160 0 0
30130992 0 0
36157190 0 0
43388628 0 0
52066354 0 0
62479625 0 0
74975550 0 0
89970660 0 0
107964792 0 0
129557750 0 0
155469300 0 0
186563160 0 0
223875792 0 0
268650950 0 0
322381140 0 0
386857368 0 0
464228842 0 0
557074610 0 0
668489532 0 0
802187438 0 0
962624926 0 0
1155149911 0 0
1386179893 0 0
1663415872 0 0
1996099046 0 0
2395318855 0 0
2874382626 0
3449259151 0
4139110981 0
4966933177 0
5960319812 0
7152383774 0
8582860529 0
10299432635 0
12359319162 0
14831182994 0
17797419593 0
21356903512 0
25628284214 0
30753941057 0
36904729268 0
44285675122 0
53142810146 0
63771372175 0
76525646610 0
91830775932 0
110196931118 0
132236317342 0
158683580810 0
190420296972 0
228504356366 0
274205227639 0
329046273167 0
394855527800 0
473826633360 0
568591960032 0
682310352038 0
818772422446 0
982526906935 0
1179032288322 0
1414838745986 0
Estimated cardinality: 3054
EncodingStats minTTL: 0
EncodingStats minLocalDeletionTime: 1560233203
EncodingStats minTimestamp: 1
KeyType: org.apache.cassandra.db.marshal.CompositeType(org.apache.cassandra.db.marshal.UTF8Type,org.apache.cassandra.db.marshal.UTF8Type,org.apache.cassandra.db.marshal.UTF8Type,org.apache.cassandra.db.marshal.UTF8Type)
ClusteringTypes: [org.apache.cassandra.db.marshal.UUIDType]
StaticColumns: {}
RegularColumns: {}
So far, here is what we have tried:
1) major compaction with a lower gc_grace_seconds
2) nodetool garbagecollect
3) nodetool scrub
None of these is helping. Again, this is only happening on one node (out of 3 replicas).
The tombstone markers generated during your major compaction are just that: markers. The data has been removed, but a delete marker is left in place so that the other replicas have gc_grace_seconds to process it too. The tombstone markers are fully dropped the next time the SSTable is compacted. Unfortunately, because you've run a major compaction (rarely ever recommended), it may be a long time before there are suitable SSTables to compact with it and clean up the tombstones. Remember also that a tombstone is only dropped after local_delete_time + gc_grace_seconds, as defined on the table.
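As a quick check of that condition, the earliest instant at which anything in this SSTable could be purged can be computed from the metadata above. A sketch, assuming the default gc_grace_seconds of 864000 (10 days), since the table's actual setting isn't shown in the question:

```python
from datetime import datetime, timezone

# Earliest purge time for a tombstone: local_delete_time + gc_grace_seconds.
# 1560233203 is the "SSTable min local deletion time" from the metadata;
# 864000 s is Cassandra's default gc_grace_seconds (an assumption here).
def earliest_purge(local_deletion_time, gc_grace_seconds=864000):
    return local_deletion_time + gc_grace_seconds

t = earliest_purge(1560233203)
print(t, datetime.fromtimestamp(t, tz=timezone.utc).isoformat())
```

Even past that instant, the markers are only removed when a compaction actually rewrites the SSTable holding them, which is exactly what a one-off major compaction makes less likely to happen soon.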
If you're interested in learning more about how tombstones and compaction work together in the context of delete operations I suggest reading the following articles:
https://docs.datastax.com/en/archived/cassandra/3.0/cassandra/dml/dmlAboutDeletes.html
https://thelastpickle.com/blog/2016/07/27/about-deletes-and-tombstones.html
plotFile_bin.p
1 #Chart properties
2 set title "Cumulative binning"
3 set terminal svg size 1200,1800
4 set out '/ethCfmTopo/2way_delay/data/spoke-ntp/cumulative_binning.svg'
5 set multiplot layout 10,1 title "Multiplot"
6 set autoscale y
7 set autoscale x
8 set ylabel "Iterations"
9 set xlabel "Round trip ms"
10 set style data histogram
11 set style fill solid border
12 set xtics scale 0 nomirror rotate by -45
13 plot for [COL=2:9] '/ethCfmTopo/2way_delay/data/spoke-ntp/data_binningbin.log' using COL:xticlabels(1) title columnheader, for [COL=2:9] '' using 0:COL:COL w labels title columnheader
While running the above script I get a ':' expected error. I have searched many forums but found no clue. Can anyone please help with this?
Error Output
#gnuplot plotFile_bin.p
plot for [COL=2:9] '/ethCfmTopo/2way_delay/data/spoke-ntp/data_binningbin.log' using COL:xticlabels(1) title columnheader, for [COL=2:9] '' using 0:COL:COL w labels title columnheader
^
"plotFile_bin.p", line 13: ':' expected
Below is my data file:
0 0 0 0 0 0 0 0 0
1.01-1.1 0 0 0 0 0 0 0 0
1.1-1.2 1 2 4 6 0 2 3 3
1.2-1.3 173 168 188 248 189 234 206 216
1.3-1.4 1529 1638 1755 1765 1816 1842 1683 1662
1.4-1.5 785 671 546 463 479 408 597 600
1.5-1.6 1 4 0 1 5 3 3 0
1.6-1.7 1 0 0 0 0 0 0 0
1.7-1.8 1 1 1 2 3 0 0 1
1.8-1.9 2 0 4 1 3 3 0 3
1.9-2 0 1 0 2 4 1 0 2
2-2.3 1 1 0 0 0 0 1 0
2.3-2.6 0 0 1 0 0 0 0 0
2.6-3 0 0 0 0 0 0 0 0
3-4 0 0 0 0 0 0 1 0
4-5 1 0 0 0 0 0 0 0
5-7 0 0 0 0 0 0 0 0
7-10 1 1 1 1 0 0 0 1
10-16 4 13 0 9 0 7 6 11
16-21 0 0 0 2 1 0 0 1
21-31 0 0 0 0 0 0 0 0
>31 0 0 0 0 0 0 0 0
Update to at least gnuplot 4.4.0; support for iteration (plot for [COL=2:9] ...) was added in that version.
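If upgrading isn't an option, the iteration can be unrolled by hand. A sketch of the equivalent plot command for the first few data columns (untested on old versions; whether columnheader accepts an explicit argument like columnheader(2) depends on the gnuplot build, so treat that as an assumption):

```gnuplot
# Hand-unrolled equivalent of "plot for [COL=2:9] ..." for gnuplot < 4.4
plot '/ethCfmTopo/2way_delay/data/spoke-ntp/data_binningbin.log' \
         using 2:xticlabels(1) title columnheader(2), \
     ''  using 3 title columnheader(3), \
     ''  using 4 title columnheader(4)
# ...continue through column 9, then add the label plots the same way:
# '' using 0:5:5 with labels title columnheader(5), etc.
```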
What I'm trying to do is process a downloaded image at the pixel level in a VB6 console application. These are the current steps.
I download a PNG file from a website; it comes down as a 32-bit greyscale image. When the image is first downloaded I need to convert it to a 24-bit bitmap in order to be able to process it. If I open the image in Paint and save it as a 24-bit bitmap, I'm able to process it without issue (when saving, Paint does show a message that says "Any transparency will be lost if you save this picture."). Once saved, I do a binary read on the file and then extract the
BITMAPFILEHEADER
BITMAPINFOHEADER
BMPData
I can then loop through the BMPData and extract and process the values I need. If I try to load the image without first saving it as a 24-bit bitmap in Paint, I get an out-of-memory error in VB (Run-Time Error 7), and BMPInfoHeader.biSizeImage is very big compared to its size after the file has been saved via Paint.
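To see what's actually in the downloaded file (and why biSizeImage looks so large before the Paint round-trip), the two headers can be parsed directly. A sketch in Python rather than VB6, using the standard BITMAPFILEHEADER/BITMAPINFOHEADER layouts:

```python
import struct

# Read the BITMAPFILEHEADER (14 bytes) and the start of the
# BITMAPINFOHEADER (40 bytes) of a BMP and report the relevant fields.
def bmp_headers(data):
    sig, file_size, _, _, pixel_offset = struct.unpack_from("<2sIHHI", data, 0)
    assert sig == b"BM", "not a BMP file"
    (hdr_size, width, height, planes, bit_count,
     compression, size_image) = struct.unpack_from("<IiiHHII", data, 14)
    return {"width": width, "height": height,
            "bit_count": bit_count, "size_image": size_image,
            "pixel_offset": pixel_offset}

# Expected pixel-array size of an uncompressed 24-bit BMP:
# 3 bytes per pixel, each row padded up to a multiple of 4 bytes.
def expected_24bit_size(width, height):
    row = (width * 3 + 3) // 4 * 4
    return row * abs(height)
```

Comparing bit_count and size_image before and after the Paint round-trip should show whether the source file really is 32 bits per pixel (a 32-bit image stores 4 bytes per pixel, one third more data than the padded 24-bit layout).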
I have tried converting with ImageMagick-6.9.0-Q16 and GraphicsMagick-1.3.21-Q8, but these just seem to mess up the converted file.
Anyone got any ideas?
Regards
Potman100
I have tried everything I know to get ImageMagick to create a 24-bit colour BMP from a greyscale 32-bit PNG file, including:
+matte to remove the alpha channel
-type truecolor to force a colour output
BMP4:output.bmp to force a BMP4
BMP3:output.bmp to force a BMP3
BMP2:output.bmp to force a BMP2
-colors 16|256 to force number of colours
-depth 8|24 to set the bit depth
passing through PPM format and back to get rid of PNG settings
writing via NetPBM's ppm2bmp/ppmtobmp
and all to no avail. I cannot get ImageMagick to create a 24-bit colour BMP from a grey 32-bit PNG.
Update
I have worked out a horrible hack that will make you a 24-bit BMP, as follows. In a greyscale image, Red = Green = Blue for every pixel. If we multiply every Blue value by 0.99, then, at the cost of just 1% error, the blue channel will be slightly less than the red and green, so ImageMagick will no longer be able to treat the image as greyscale when saving the BMP... like this:
convert input.png +matte -channel B -evaluate multiply 0.99 image.bmp
I don't know if that will work for you; if so, good, stop here. If not, continue...
The only thing I can suggest is that you use the PPM format, which is even easier than BMP for you to read:
convert yourImage.png output.ppm
That will get you a P6 (binary) PPM image, as described here. If you want the even easier ASCII P3 version, use this command:
convert yourImage.png -compress none output.ppm
and your image will look like this:
P3
384 128
255
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 255 255 255
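Reading the P3 file back is then only a few lines in any language; a minimal sketch of a parser (shown in Python for illustration; it tolerates # comment lines, which may or may not be present in ImageMagick's output):

```python
# Minimal parser for ASCII (P3) PPM: returns (width, height, maxval, pixels),
# where pixels is a flat list of (r, g, b) tuples in row-major order.
def read_p3(text):
    tokens = []
    for line in text.splitlines():
        line = line.split("#", 1)[0]  # strip comments
        tokens.extend(line.split())
    assert tokens[0] == "P3", "not an ASCII PPM"
    width, height, maxval = int(tokens[1]), int(tokens[2]), int(tokens[3])
    values = [int(t) for t in tokens[4:4 + 3 * width * height]]
    pixels = list(zip(values[0::3], values[1::3], values[2::3]))
    return width, height, maxval, pixels
```

For a greyscale source every (r, g, b) triple has r = g = b, so any one channel gives the pixel intensity.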
Wow, thanks for all the time and effort you have taken to try to get a working solution for me.
I have tried the above, and the one that will probably work is
convert yourImage.png -compress none output.ppm
I would have to do some further testing to check the values, but I think it would work.
The need to create a 24-bit bitmap is part of a project I am doing. The site I get the PNG image from generates the image to offer a £ price for an item that I may sell to them, so the PNG contains a £ value (e.g. £1.56). I wanted to be able to work out the value of the offer in a DOS program I have written, so that it can be called from PHP.
I did find a way to do it. I use xmlhttp to download the image via an ADODB stream, and found that I could save the image in a number of formats other than BMP. Looking at the layout of different image formats, the TIF format looked promising, and I found some code online that uses the Microsoft Image Acquisition library to manipulate the TIF file.
By loading the TIF file into a WIA.ImageFile and applying a WIA.Vector to it, you can iterate over the pixels in the image and get the raw colour values. By summing the values per column, and taking into account that some columns are all white (these being just blank separator columns), I was able to generate a unique value per character.
Thanks for all your help.
Potman100
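The column-summing approach described above can be sketched as follows (hypothetical Python, assuming greyscale pixel rows as input and 255 meaning white):

```python
# Split a greyscale pixel grid into characters at all-white columns, and
# give each character a signature: the tuple of its column sums.
def column_signatures(grid, white=255):
    width = len(grid[0])
    col_sums = [sum(row[x] for row in grid) for x in range(width)]
    blank = white * len(grid)   # sum of an all-white column
    chars, current = [], []
    for s in col_sums:
        if s == blank:          # blank column: boundary between characters
            if current:
                chars.append(tuple(current))
                current = []
        else:
            current.append(s)
    if current:
        chars.append(tuple(current))
    return chars

# Two one-column "characters" separated by an all-white column:
print(column_signatures([[0, 255, 0],
                         [0, 255, 255]]))
```

Identical glyphs always produce identical signatures, which is what makes a per-character lookup table possible.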