In the environment I'm working in, the built-in Lua functions give me OLE colour codes whenever I ask what colour a given word is but, on the other hand, expect RGB colour codes whenever I want to colour a given word.
So far I've been googling each OLE colour code to find its page (along with its RGB colour code) on htmlcsscolor.com, because even though that site has the information, it won't let me search for a colour by its OLE code.
Is there a quicker way (maybe a function, or at least instructions on how to convert) to retrieve a colour's RGB code from its OLE code (using Lua if possible)?
The OLE color code can be converted to an "RGB color code" (an ambiguous term here, because Lua has no inherent concept of colors) as follows:
The red component is ole_color % 256.
The green component is floor(ole_color / 256) % 256.
The blue component is floor(ole_color / 65536) % 256.
Each component ranges from 0 to 255.
(Note that bit shifts and bitwise AND would be the natural tools here, but Lua before 5.3 doesn't support bitwise operations without help from a library; depending on which program embeds Lua, it may provide built-in bitwise functions.)
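A minimal sketch of the conversion as a Lua function, using only arithmetic so no bitwise library is needed (the function name is just for illustration):

-- Convert an OLE color (0x00BBGGRR layout) to R, G, B components (0-255 each).
local function ole_to_rgb(ole_color)
  local r = ole_color % 256
  local g = math.floor(ole_color / 256) % 256
  local b = math.floor(ole_color / 65536) % 256
  return r, g, b
end

print(ole_to_rgb(0x00FF8040))  --> 64  128  255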
In my quest to add alpha capability to my image blending tools in Matlab, I've come across a bit of a snag. Among others, I've been using these links as my references for how foreground and background alpha play into the composition of both the output color data and the output alpha.
My original approach was to simply use a Src-Over composition for the "normal" blend mode and a Src-Atop composition for other modes. When compared to the output from GIMP, this produced similar but differing results. The output alpha matches, but the RGB data differs.
Specifically, the foreground's color influence over the background is zero where the background alpha is zero. After spending a few hours naively looking through the GIMP 2.8.10 source, I noticed a few things that confuse me.
Barring certain modes and a few ancillary things that happen during export that I haven't yet worked out from the code, the approach is approximately this:
if ~normalmode
    FGalpha = min(FGalpha, BGalpha); % << why this?
end
FGalpha = FGalpha * mask * opacity;
OUTalpha = BGalpha + (1 - BGalpha) * FGalpha;
ratio = FGalpha / (OUTalpha + eps);
OUT = OUT * ratio + BG * (1 - ratio); % OUT already holds the blended color data here
if normalmode
    OUT = cat(3, OUT, OUTalpha);
else
    OUT = cat(3, OUT, BGalpha);
end
My main point of curiosity is that I don't understand, conceptually, why one would take the minimum of the layer alphas for composition. Certainly, this approach produces results that match GIMP, but I'm uncomfortable establishing it as a default behavior if I don't understand the reasoning.
This may be best asked in a GIMP forum somewhere, but I figured it would be more fruitful to approach a general audience. To clarify and summarize:
Does it make sense that colors in a transparent BG region are unaffected by multiplication with an opaque foreground color? Wouldn't this risk causing bleeding of unaltered data near hard mask edges in some future operation?
Although I haven't found anything, are there other applications out there that use this approach?
Am I wrong to use GIMP's behavior as a reference? I don't have PS to compare against, and ImageMagick is so flexible that it doesn't really suggest a particular expected behavior. Certainly, GIMP does some things incorrectly; maybe this is something else that may change.
EDIT:
I can at least answer the last question by obviating it. I've decided to add support for both the SVG 1.2 and the legacy GIMP methods. The GEGL methods to be used by GIMP in the future follow the SVG methods, so I figure that suggests the SVG methods are the more future-proof choice.
For what it's worth, the SVG methods are all based on a Porter-Duff Src-Over composition. Reading the documentation, the fact that the blend math is the same is obscured, because the blend and composition steps are algebraically combined using premultiplied alpha to reduce the overall computational cost. With the exception of SoftLight, the core blend math is the same as that used by GIMP and elsewhere.
Any other blend operation (e.g. PinLight, Hue) can be made compatible by just doing:
As = Sa * (1 - Da);
Ad = Da * (1 - Sa);
Ab = Sa * Da;
Ra = As + Ad + Ab; % output alpha
Rc = ( f(Sc,Dc)*Ab + Sc*As + Dc*Ad ) / Ra;
and then doing some algebra if you want to simplify it.
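For instance, with f(Sc,Dc) = Sc (normal mode) the numerator collapses and you recover plain Src-Over: Rc = (Sc*Sa + Dc*Da*(1-Sa)) / Ra. Here is the same composition as a small sketch (in Lua, for illustration; straight, non-premultiplied components in [0,1], hypothetical helper name):

-- Generic blend-then-composite; f is the blend function, e.g. multiply.
local function composite(f, Sc, Sa, Dc, Da)
  local As = Sa * (1 - Da)   -- source over transparent background
  local Ad = Da * (1 - Sa)   -- background showing through
  local Ab = Sa * Da         -- overlap region, where the blend applies
  local Ra = As + Ad + Ab    -- output alpha
  if Ra == 0 then return 0, 0 end
  local Rc = (f(Sc, Dc) * Ab + Sc * As + Dc * Ad) / Ra
  return Rc, Ra
end

-- Multiply blend, half-opaque foreground over an opaque background:
print(composite(function(s, d) return s * d end, 1.0, 0.5, 0.5, 1.0))  --> 0.5  1.0

Guarding Ra == 0 explicitly avoids the divide-by-zero that the +eps trick papers over in the Matlab snippet above.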
I am having some strange problems with Excel's solver. Basically, what I am trying to do is curve fit my data. I have two different lines: one is my calibration line, and the other is a derived line that I am attempting to match to the calibration line. My line depends on 19 variable parameters (perhaps this is too many? I have tried fewer without result), and I am using solver to adjust these parameters to bring the two lines as close together as possible.
For example:
The QP column contains the variables I would like changed; changing these will draw me closer to or further from the calibration curve. Each subsequent value of QP must be greater than or equal to the previous one.
Power (col B)    QP (col C)
1                57000
2                65000
3                70000
4                80000
5                80000
Therefore my Excel solver constraints look like this: C1:C19 >= 0, C1:C19 <= 100000, and C2 >= C1, C3 >= C2, C4 >= C3... I have also tried making another column of the differences between successive values and then requiring diff >= 0.
To compare this with my calibration curve, I take the calibration curve data, subtract the data derived from QP, and square the result to form my sum-of-squares error. For example:
(Calibration - DerivedQP)^2 = SS(x), where x represents the row number
Sum(SS(x)) = SSE
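In code terms the error is just this (a sketch in Lua, purely for illustration):

-- Sum-of-squares error between the calibration curve and the derived curve.
local function sse(calibration, derived)
  local total = 0
  for i = 1, #calibration do
    local diff = calibration[i] - derived[i]
    total = total + diff * diff
  end
  return total
end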
SSE is what I have set solver to minimize, and upon changing QP everything updates automatically. There are no IF statements and no pivot tables in use.
If I remove the constraints like C2 >= C1, everything works perfectly, except that the derived values are not feasible. But when the solver is run with these constraints, nothing gets changed, and no matter which starting guesses I use (so that I can ensure I haven't found a local minimum), the solver cannot improve upon my solution. This has led me to believe that something in my constraints is broken, since I can very easily improve on the solution by guess and check.
The rest of solver's settings are at the defaults, and the Evolutionary method is used since my curve isn't smooth (I don't think). I had this working in the past, and now something seems to be broken. Any ideas are appreciated! Thank you so much! Sorry if I am missing any critical information. I am also familiar with Matlab and R if there are better methods in those languages.
I found the solution to my problem. I don't know if this will be helpful to anyone else, since my problem was vague and pretty specific to me. That being said, the problem was in the constraints. I changed some data on my Excel sheet to allow for fewer constraints. An example might look like this:
Guess    Squared    Added                   Q
-12      (-12)^2    0
-16      (-16)^2    =(-16)^2 + 0            256
+7       (7)^2      =(7)^2 + (-16)^2 + 0    305
Now I allow solver to guess any number, subject to minimal constraints.
Essentially, what is happening now is that the sheet makes any guess solver produces feasible. Squaring the guesses gives me non-negative values, and the Added column ensures that each successive value is greater than or equal to the previous one, so there are very few explicit constraints left. I also changed the solver method from Evolutionary to GRG Nonlinear.
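Stated compactly: if the free guesses are g(1)..g(n), then QP(i) = g(1)^2 + ... + g(i)^2 is non-decreasing by construction, so monotonicity never has to be a solver constraint. A sketch of that mapping (in Lua, for illustration; the bookkeeping in my actual sheet differs slightly):

-- Map unconstrained guesses to a non-decreasing sequence by accumulating squares.
local function monotonic_qp(guesses)
  local qp, total = {}, 0
  for i, g in ipairs(guesses) do
    total = total + g * g  -- each increment is >= 0, so qp never decreases
    qp[i] = total
  end
  return qp
end

-- e.g. guesses {-12, -16, 7} give 144, 400, 449: always non-decreasing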
Tips for getting solver to work:
Try to encode constraints in the spreadsheet itself (other than bounds; bounds seem to be fine) wherever possible; the more constraints I set in solver, the less likely my solution was to work.
Hope that helps, sorry if I have provided any incorrect information.
I am working on a project based on OptiX. I need to use progressive photon mapping, hence I am trying to use the Progressive Photon Mapping sample, but the transparency material is not implemented.
I've googled a lot and also tried to understand other samples that contain transparency materials (e.g. Glass, Tutorial, Whitted). In the end I arrived at the following approach:
Find the hit point (intersection point) (h below)
Generate another ray from that point
Use the color returned for the newly generated ray
Below you can find the code for that part, but I do not understand why I get black (0.f, 0.f, 0.f) for the newly generated ray (step 3 above).
optix::Ray ray( h, t, rtpass_ray_type, scene_epsilon ); // origin h, direction t, tmin = scene_epsilon
HitPRD refr_prd;
refr_prd.ray_depth = hit_prd.ray_depth + 1;
refr_prd.importance = importance;
rtTrace( top_object, ray, refr_prd );
result += (1.0f - reflection) * refraction_color * refr_prd.attenuation;
Any ideas will be appreciated.
Please note that refr_prd.attenuation should contain some color after rtTrace() has run. I mentioned reflection and refraction_color to help you better understand the procedure; you can simply ignore them.
There are a number of methods to diagnose your problem.
Isolate the contribution of the refracted ray by removing any contribution from the reflection ray.
Make sure you have a miss program. HitPRD::attenuation needs to be written to by all of your closest-hit programs and your miss programs. If you suspect the miss program is being called, set your miss color to something obviously wrong ([1, 0, 1] is my favorite).
Use rtPrintf in combination with rtContextSetPrintLaunchIndex (or setPrintLaunchIndex) to print out the individual values of the product and see which term is zero for a given pixel. If you don't restrict the output to a given launch index, you will get too much output. You probably also want to print out the ray depth as well.
I have to store colors in a database.
What is the best way to store a color in a database field: by color name, or something else?
If it's for an HTML page, storing the #RRGGBB value as a string is probably enough.
If it's for .NET, it supports building a color from its ARGB value:
System.Drawing.Color c = System.Drawing.Color.FromArgb(argb); // argb is the stored int
int x = c.ToArgb();
so you could just store that int.
The ideal storage format depends on how you plan to use the database.
The simplest solution is of course just storing everything as a 6-character ASCII hex string of the RGB color, without support for any other format, though you may run into issues if you later want to support additional formats.
For readability, flexibility, and ease of access, using a plain string is a good idea. The difference in storage space between a hex color string and a raw integer is negligible in most cases. For a speed boost, you can index the color field. And for flexibility, you could add one or more of the following features (a parsing sketch follows the list):
Assume the contextual default color if NULL, blank, or invalid
Accept an optional trailing alpha byte 00-FF, assuming FF (opaque) if omitted
Accept both full (AABBCC) and shorthand (ABC) syntax, which is half the size, faster to type, and supported by CSS
Support an optional leading # character, which is common
Support raw strings, to hold anything CSS supports like "rgba(255,255,255,1)" or "red"
Support custom color-mode strings like "cmyk()", "hsv()", "hsl()", "lab()", etc.
Assume an RGB(A) hex string if it begins with a # or if its length is 3, 4, 6, or 8 and it contains only [0-9A-Fa-f]
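A minimal sketch of such a lenient parser (in Lua, for illustration; the function is hypothetical and only covers the hex conventions above, not the CSS or custom color-mode strings):

-- Parse "#RGB", "#RGBA", "#RRGGBB", or "#RRGGBBAA" (the # is optional).
-- Returns r, g, b, a (0-255 each), or nil so the caller can fall back
-- to the contextual default color.
local function parse_color(s)
  if s == nil or s == "" then return nil end
  s = s:gsub("^#", "")                                    -- optional leading #
  if #s == 3 or #s == 4 then s = s:gsub(".", "%0%0") end  -- expand ABC -> AABBCC
  if #s == 6 then s = s .. "FF" end                       -- assume opaque
  if #s ~= 8 or not s:match("^%x+$") then return nil end  -- invalid -> default
  return tonumber(s:sub(1, 2), 16), tonumber(s:sub(3, 4), 16),
         tonumber(s:sub(5, 6), 16), tonumber(s:sub(7, 8), 16)
end

print(parse_color("#1a2b3c"))  --> 26  43  60  255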
To optimize search and sort speed, as well as disk use, storing as an unsigned integer is the way to go. This is because a single number is faster to search through than a string of characters, can be stored internally as small as a few bits, and you can still filter by color channels in your queries using FromArgb() and similar functions. The downside is your code then needs to constantly convert things back and forth for every color field in every query, which may actually offset any database speed gain.
A hybrid approach may be worth exploring. For example, consider a table of all possible 8-bit-per-channel RGB values, with fields composed of things like id, rgbhex, cssname, cmyk, hsl, hsv, lab, rgb, etc. You'd need to automate the creation of such a table since it would be so large (16777216 entries). It would add over 16 MB to your table, but the advantage to this solution is that all your color values would just be a single integer id field linked with a foreign key to the color lookup table. Fast sorts and searches, any color data you need without any conversion, and extremely extensible. You could also keep the table in its own database file to be shared by any other database or script in your application. Admittedly this solution is overkill for most cases.
Probably the colour value would be best, e.g. #FFFFFF or #FF0000
Store a colour as a 24- or 32-bit integer, as in HTML/CSS (i.e. #FF00CC), but converted to an integer rather than stored as a string.
Integers will take up less space than strings (especially VARCHARs).
Store it as an int
Use ToArgb and FromArgb to set and get the values.
I think it depends. If you just need to store the color, then hex notation should be fine. If you need to perform queries against specific color channels, then you'd want smallint fields for each channel (be it RGB, ARGB, CMYK, etc.).
So, for simple storage, keep it simple. If you need to perform analysis, you'll need to consider alternate options as dictated by your problem domain.
I suggest having a 3 column color lookup table:
ID int;
Name varchar(40) null;
ColorVal char(8) or int (depending on how you're representing colors)
For unnamed colors just leave the name field null
I store it as a char(9):
I include the '#' sign so that I don't have to prepend it in code and can use the value immediately
A normal char instead of nchar
It stores the transparency
I'd go for hexadecimal notation if the colors are limited to web colors.
So for example #0000FF for blue.
More info here: http://en.wikipedia.org/wiki/Web_colors
What format are you looking to store the colors in? CMYK, RGB, Pantone?
It kinda helps to know... the strict #RGB hex format works great if it's for web colors or an application, but not so good if you're trying to mix paints.
Why don't you use both? The table structure would be an int ARGB for the key and a varchar for the name.
ARGB (Key)    Name
FFFFFFFF      White
FF000000      Black
In my engine I have a need to be able to detect DXT1 textures that have texels with 0 alpha (e.g. a cutout for a window frame). This is easy for textures I compress myself, but I'm not sure about textures that are already compressed.
Is there an easy way to tell from the header whether a DDS image contains alpha?
As far as I know, there's no way to tell from the header. There's a DDPF_ALPHAPIXELS flag, but I don't think that will get set based on what's in the pixel data. You'd need to parse the DXT1 blocks, and look for colours that have 0 alpha in them (making sure to check that the colour is actually used in the block, too, I suppose).
No, the DDS header only uses the alpha flags for uncompressed images. I had a similar need to figure out whether a DXT1 image was using 1-bit alpha, and after a long search I came across this reference: https://msdn.microsoft.com/en-us/library/windows/desktop/bb147243(v=vs.85).aspx
Basically, if color_0 <= color_1, there is a possibility the texture has 1-bit alpha. To verify it, check the next 32 bits (the index data) in 2-bit pairs and see whether any pair is 11 (binary). Continue this for every block until such a pair is found.
I agree with the accepted answer. Your job may be made a bit easier by using the "squish" library to decompress the blocks for you.
http://www.sjbrown.co.uk/?code=squish
DDS is a very poor wrapper for DXT (or BTC) data. The header will not help you.
Plain original DXT1 did not have any alpha. I believe D3D nowadays does actually decode DXT1 with alpha, though. Every DXT1 block looks like this: color0 (16 bits), color1 (16 bits), indices (32 bits). If the 16-bit color0 value is greater than color1 (just a uint16 comparison, nothing fancy!), the block is in four-color mode and has no alpha; otherwise an index value of 11 means transparent. So to answer your question: skip the header, read 16 bits a, read 16 bits b; if a <= b the block can contain alpha (check the 2-bit indices for 11 to be sure), otherwise skip the 32 index bits and repeat until EOF. Other DXT formats like DXT5 always have alpha. It is very rare that people rely on the DXT1 alpha trick because some hardware (Intel...) does not support it reliably.
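For completeness, here is that scan as a small sketch (in Lua 5.3+, for illustration; it assumes "data" holds the raw DXT1 block data with the DDS header already stripped):

-- Scan DXT1 blocks (8 bytes each: color0, color1, then 32 bits of indices)
-- for any block that can contain 1-bit alpha.
local function dxt1_has_alpha(data)
  local pos = 1
  while pos + 7 <= #data do
    local c0, c1, indices, nextpos = string.unpack("<I2I2I4", data, pos)
    if c0 <= c1 then
      -- three-color mode: a 2-bit index of 11 selects the transparent color
      for i = 0, 15 do
        if ((indices >> (2 * i)) & 3) == 3 then
          return true
        end
      end
    end
    pos = nextpos
  end
  return false
end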