Create a colour histogram from an image file - nim-lang

I'd like to use Nim to check the results of my Puppeteer test run executions.
Part of the end result is a screenshot. That screenshot should contain a certain amount of active colours, an active colour being orange, blue, red, or green; they indicate that activity is present in the incoming data. Black, grey, and white need to be excluded, as they only represent static data.
I haven't found a solution I can use yet.
import stb_image/read as stbi

var
  w, h, c: int
  data: seq[uint8]
  cBin: array[256, int]  # colour range is 0..255 afaict

data = stbi.load("screenshot.png", w, h, c, stbi.Default)
for d in data:
  cBin[int(d)] = cBin[int(d)] + 1
echo cBin
Now I have a uint8 sequence, which I can use to construct a histogram of the values, but I don't know how to map these to something like RGB values. Pointers, anyone?
Is there a package that does this automagically? I didn't spot one.

stbi.load() will return a sequence of interleaved uint8 color components. The number of interleaved components is determined either by c (i.e. channels_in_file) or desired_channels when it is non-zero.
For example, when channels_in_file == stbi.RGB and desired_channels == stbi.Default there are 3 interleaved components of red, green, and blue.
[
  # r    g    b
  255,   0,   0,  # Pixel 1
    0, 255,   0,  # Pixel 2
    0,   0, 255,  # Pixel 3
]
You can process the above like:
import colors

for i in countUp(0, data.len - 3, step = stbi.RGB):
  let
    r = data[i + 0].int
    g = data[i + 1].int
    b = data[i + 2].int
    pixelColor = colors.rgb(r, g, b)
  echo pixelColor
You can read more on this in the comments in stb_image.h.
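As for excluding black, grey, and white: those pixels have nearly equal R, G, and B values, so one simple approach is to count only pixels whose channels are spread far enough apart. A rough, language-agnostic sketch of that idea (shown here in Python; the threshold and function names are mine, and the same test is only a few lines in Nim):
# Rough sketch: count "active" (chromatic) pixels in the interleaved byte
# sequence returned by stbi.load(); the spread threshold is illustrative.
def is_active(r, g, b, spread=30):
    # black/grey/white pixels have nearly equal channels, i.e. a small spread
    return max(r, g, b) - min(r, g, b) > spread

def count_active(data, channels=3):
    active = 0
    for i in range(0, len(data) - channels + 1, channels):
        if is_active(data[i], data[i + 1], data[i + 2]):
            active += 1
    return active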

Related

Calculate a colour in a linear gradient

I'd like to implement something like the PowerPoint image below: a gradient that goes between three values.
It starts at A (-1), the midpoint is B (0), and the end is C (1).
I have realised that I can save some effort by calculating the 'start' as a-to-b and the 'end' as b-to-c, i.e. as two 2-colour gradients instead of one gradient with three stops.
But I'm stumped (despite googling) on how to get from one colour to another - ideally in the RGB colour space.
I'd like to be able to have something like this -
const colourSpace = (value, startColor, endColor) => {...}
colorSpace(-0.25, red, yellow) // some sort of orangey color
colorSpace(1, yellow, green) // fully green
colorSpace(0.8, yellow, green) // mostly green
This isn't a front-end application, so no CSS gradients - which is what google was mostly referencing.
Thanks all,
Ollie
If you aren't too worried about being perceptually consistent across the color space (you would need to work in something like LAB mode to do that), you can just take the linear interpolation in RGB space. Basically you take a distance (between 0 and 1), multiply it by the difference in the coordinates, and add the result to the first color's coordinates. This lets you find arbitrary points (i.e. colors) along the line between any two colors.
For example between red and yellow:
let canvas = document.getElementById('canvas')
var ctx = canvas.getContext('2d');

let rgb1 = [255, 0, 0]   // red
let rgb2 = [255, 255, 0] // yellow

function getPoint(d, a1, a2) {
  // find a color d% between a1 and a2
  return a1.map((p, i) => Math.floor(a1[i] + d * (a2[i] - a1[i])))
}

// for demo purposes fill a canvas
for (let i = 0, j = 0; i < 1; i += .002, j++) {
  let rgb = getPoint(i, rgb1, rgb2)
  ctx.fillStyle = `rgba(${rgb.join(",")}, 1)`
  ctx.fillRect(j, 0, 1, 200);
}
<canvas id="canvas" width="500"></canvas>
You can repeat this to get multiple 'stops' in the gradient.
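For example, here is a hedged sketch (in Python, with names of my own choosing) of mapping the question's -1..1 range onto two segments, A-to-B and B-to-C:
def lerp(d, c1, c2):
    # d in [0, 1]: 0 -> c1, 1 -> c2
    return tuple(int(a + d * (b - a)) for a, b in zip(c1, c2))

def three_stop(value, a, b, c):
    # value in [-1, 1]: -1 -> a, 0 -> b, 1 -> c
    if value <= 0:
        return lerp(value + 1, a, b)
    return lerp(value, b, c)

red, yellow, green = (255, 0, 0), (255, 255, 0), (0, 128, 0)
print(three_stop(-0.25, red, yellow, green))  # an orangey yellow, roughly (255, 191, 0)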
I ended up using Chroma for converting between colour spaces.

How to make space for stitching multiple images in OpenCV - Python3 [duplicate]

I'm trying to stitch 2 images together. I use template matching to find 3 sets of points, which I pass to cv2.getAffineTransform() to get a warp matrix, which I then pass to cv2.warpAffine() to align my images.
However, when I join my images, the majority of the affine'd image isn't shown. I've tried using different techniques to select points, changed the order of arguments, etc., but I can only ever get a thin sliver of the affine'd image to be shown.
Could somebody tell me whether my approach is a valid one and suggest where I might be making an error? Any guesses as to what could be causing the problem would be greatly appreciated. Thanks in advance.
This is the final result that I get. Here are the original images (1, 2) and the code that I use:
EDIT: Here are the contents of the variable trans:
array([[  1.00768049e+00,  -3.76690353e-17,  -3.13824885e+00],
       [  4.84461775e-03,   1.30769231e+00,   9.61912797e+02]])
And here are the points passed to cv2.getAffineTransform:
unified_pair1
array([[  671.,  1024.],
       [   15.,   979.],
       [   15.,   962.]], dtype=float32)
unified_pair2
array([[ 669.,   45.],
       [  18.,   13.],
       [  18.,    0.]], dtype=float32)
import cv2
import numpy as np

def showimage(image, name="No name given"):
    cv2.imshow(name, image)
    cv2.waitKey(0)
    cv2.destroyAllWindows()
    return

image_a = cv2.imread('image_a.png')
image_b = cv2.imread('image_b.png')

def get_roi(image):
    roi = cv2.selectROI(image)  # spacebar to confirm selection
    cv2.waitKey(0)
    cv2.destroyAllWindows()
    crop = image[int(roi[1]):int(roi[1]+roi[3]), int(roi[0]):int(roi[0]+roi[2])]
    return crop

temp_1 = get_roi(image_a)
temp_2 = get_roi(image_a)
temp_3 = get_roi(image_a)

def find_template(template, search_image_a, search_image_b):
    ccnorm_im_a = cv2.matchTemplate(search_image_a, template, cv2.TM_CCORR_NORMED)
    template_loc_a = np.where(ccnorm_im_a == ccnorm_im_a.max())
    ccnorm_im_b = cv2.matchTemplate(search_image_b, template, cv2.TM_CCORR_NORMED)
    template_loc_b = np.where(ccnorm_im_b == ccnorm_im_b.max())
    return template_loc_a, template_loc_b

coord_a1, coord_b1 = find_template(temp_1, image_a, image_b)
coord_a2, coord_b2 = find_template(temp_2, image_a, image_b)
coord_a3, coord_b3 = find_template(temp_3, image_a, image_b)

def unnest_list(coords_list):
    coords_list = [a[0] for a in coords_list]
    return coords_list

coord_a1 = unnest_list(coord_a1)
coord_b1 = unnest_list(coord_b1)
coord_a2 = unnest_list(coord_a2)
coord_b2 = unnest_list(coord_b2)
coord_a3 = unnest_list(coord_a3)
coord_b3 = unnest_list(coord_b3)

def unify_coords(coords1, coords2, coords3):
    unified = []
    unified.extend([coords1, coords2, coords3])
    return unified

# Create 2 lists containing 3 pairs of coordinates
unified_pair1 = unify_coords(coord_a1, coord_a2, coord_a3)
unified_pair2 = unify_coords(coord_b1, coord_b2, coord_b3)

# Convert elements of lists to numpy arrays with data type float32
unified_pair1 = np.asarray(unified_pair1, dtype=np.float32)
unified_pair2 = np.asarray(unified_pair2, dtype=np.float32)

# Get the affine transformation matrix
trans = cv2.getAffineTransform(unified_pair1, unified_pair2)

# Apply the affine transformation to the original image
result = cv2.warpAffine(image_a, trans, (image_a.shape[1] + image_b.shape[1], image_a.shape[0]))
result[0:image_b.shape[0], image_b.shape[1]:] = image_b
showimage(result)
cv2.imwrite('result.png', result)
Sources: Approach based on advice received here, this tutorial and this example from the docs.
July 12 Edit:
This post inspired GitHub repos providing functions to accomplish this task; one for a padded warpAffine() and another for a padded warpPerspective(). Check out the Python version or the C++ version.
Transformations shift the location of pixels
What any transformation does is takes your point coordinates (x, y) and maps them to new locations (x', y'):
[s*x']   [h1 h2 h3]   [x]
[s*y'] = [h4 h5 h6] * [y]
[ s  ]   [h7 h8  1]   [1]
where s is some scaling factor. You must divide the new coordinates by the scale factor to get back the proper pixel locations (x', y'). Technically, this is only true of homographies---(3, 3) transformation matrices---you don't need to scale for affine transformations (you don't even need to use homogeneous coordinates...but it's better to keep this discussion general).
Then the actual pixel values are moved to those new locations, and the color values are interpolated to fit the new pixel grid. So during this process, these new locations get recorded at some point. We'll need those locations to see where the pixels actually move to, relative to the other image. Let's start with an easy example and see where points are mapped.
Suppose your transformation matrix simply shifts pixels to the left by ten pixels. Translation is handled by the last column; the first row is the translation in x and second row is the translation in y. So we would have an identity matrix, but with -10 in the first row, third column. Where would the pixel (0,0) be mapped? Hopefully, (-10,0) if logic makes any sense. And in fact, it does:
transf = np.array([[1.,0.,-10.],[0.,1.,0.],[0.,0.,1.]])
homg_pt = np.array([0,0,1])
new_homg_pt = transf.dot(homg_pt)
new_homg_pt /= new_homg_pt[2]
# new_homg_pt = [-10. 0. 1.]
Perfect! So we can figure out where all points map with a little linear algebra. We will need to get all the (x,y) points and put them into a huge array so that every single point is in its own column. Let's pretend our image is only 4x4.
h, w = src.shape[:2] # 4, 4
indY, indX = np.indices((h,w)) # similar to meshgrid/mgrid
lin_homg_pts = np.stack((indX.ravel(), indY.ravel(), np.ones(indY.size)))
These lin_homg_pts have every homogenous point now:
[[ 0. 1. 2. 3. 0. 1. 2. 3. 0. 1. 2. 3. 0. 1. 2. 3.]
[ 0. 0. 0. 0. 1. 1. 1. 1. 2. 2. 2. 2. 3. 3. 3. 3.]
[ 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1.]]
Then we can do matrix multiplication to get the mapped value of every point. For simplicity, let's stick with the previous homography.
trans_lin_homg_pts = transf.dot(lin_homg_pts)
trans_lin_homg_pts /= trans_lin_homg_pts[2,:]
And now we have the transformed points:
[[-10. -9. -8. -7. -10. -9. -8. -7. -10. -9. -8. -7. -10. -9. -8. -7.]
[ 0. 0. 0. 0. 1. 1. 1. 1. 2. 2. 2. 2. 3. 3. 3. 3.]
[ 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1.]]
As we can see, everything is working as expected: we have shifted the x-values only, by -10.
Pixels can be shifted outside of your image bounds
Notice that these pixel locations are negative---they're outside of the image bounds. If we do something a little more complex and rotate the image by 45 degrees, we'll get some pixel values way outside our original bounds. We don't care about every pixel value, though; we just need to know how far outside the original image bounds the farthest pixels land, so that we can pad the original image by that much before displaying the warped image on it.
theta = 45*np.pi/180
transf = np.array([
    [ np.cos(theta), np.sin(theta), 0],
    [-np.sin(theta), np.cos(theta), 0],
    [0., 0., 1.]])
print(transf)
trans_lin_homg_pts = transf.dot(lin_homg_pts)
minX = np.min(trans_lin_homg_pts[0,:])
minY = np.min(trans_lin_homg_pts[1,:])
maxX = np.max(trans_lin_homg_pts[0,:])
maxY = np.max(trans_lin_homg_pts[1,:])
# minX: 0.0, minY: -2.12132034356, maxX: 4.24264068712, maxY: 2.12132034356,
So we see that we can get pixel locations well outside our original image, both in the negative and positive directions. The minimum x value doesn't change, because when a homography applies a rotation, it does so from the top-left corner. Now, one thing to note here is that I've applied the transformation to all pixels in the image. But this is really unnecessary; you can simply warp the four corner points and see where they land.
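Here is a minimal sketch of that shortcut (reusing transf and the 4x4 size from above; the variable names are mine), which gives the same extents as the full-image version:
# Warp only the four corners instead of every pixel and read off the extent.
corners = np.array([[0, w, w, 0],
                    [0, 0, h, h],
                    [1, 1, 1, 1]], dtype=float)  # homogeneous corner points
warped_corners = transf.dot(corners)
warped_corners /= warped_corners[2, :]  # scale divide (a no-op for a pure rotation)
minX, minY = warped_corners[0].min(), warped_corners[1].min()
maxX, maxY = warped_corners[0].max(), warped_corners[1].max()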
Padding the destination image
Note that when you call cv2.warpAffine() you have to input the destination size. These transformed pixel values reference that size. So if a pixel gets mapped to (-10,0), it won't show up in the destination image. That means we'll have to make another homography with a translation that shifts all pixel locations to be positive, and then we can pad the image matrix to compensate for our shift. We'll also have to pad the original image on the bottom and the right if the homography moves points to positions bigger than the image.
In the recent example, the min x value is the same, so we need no horizontal shift. However, the min y value has dropped by about two pixels, so we need to shift the image two pixels down. First, let's create the padded destination image.
pad_sz = list(src.shape) # in case three channel
pad_sz[0] = np.round(np.maximum(pad_sz[0], maxY) - np.minimum(0, minY)).astype(int)
pad_sz[1] = np.round(np.maximum(pad_sz[1], maxX) - np.minimum(0, minX)).astype(int)
dst_pad = np.zeros(pad_sz, dtype=np.uint8)
# pad_sz = [6, 4, 3]
As we can see, the height increased from the original by two pixels to account for that shift.
Add translation to the transformation to shift all pixel locations to positive
Now, we need to create a new homography matrix to translate the warped image by the same amount that we shifted by. And to apply both transformations---the original and this new shift---we have to compose the two homographies (for an affine transformation, you can simply add the translation, but not for a homography). Additionally, we need to divide by the last entry to make sure the scales are still proper (again, only for homographies):
anchorX, anchorY = 0, 0
transl_transf = np.eye(3,3)
if minX < 0:
    anchorX = np.round(-minX).astype(int)
    transl_transf[0,2] += anchorX
if minY < 0:
    anchorY = np.round(-minY).astype(int)
    transl_transf[1,2] += anchorY
new_transf = transl_transf.dot(transf)
new_transf /= new_transf[2,2]
I also created here the anchor points for where we will place the destination image into the padded matrix; it's shifted by the same amount the homography will shift the image. So let's place the destination image inside the padded matrix:
dst_pad[anchorY:anchorY+dst_sz[0], anchorX:anchorX+dst_sz[1]] = dst
Warp with the new transformation into the padded image
All we have left to do is apply the new transformation to the source image (with the padded destination size), and then we can overlay the two images.
warped = cv2.warpPerspective(src, new_transf, (pad_sz[1],pad_sz[0]))
alpha = 0.3
beta = 1 - alpha
blended = cv2.addWeighted(warped, alpha, dst_pad, beta, 1.0)
Putting it all together
Let's create a function for this, since we were creating quite a few variables we don't need at the end here. For inputs we need the source image, the destination image, and the original homography. And for outputs we simply want the padded destination image and the warped image. Note that in the examples we used a 3x3 homography, so we'd better make sure we send in 3x3 transforms instead of 2x3 affine or Euclidean warps. You can just add the row [0,0,1] to the bottom of any affine warp and you'll be fine.
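For instance, a quick sketch of that promotion (reusing the trans matrix computed in the question; the new variable name is mine):
# Promote a 2x3 affine warp to a 3x3 matrix so it can be composed like a homography.
transf_3x3 = np.vstack([trans, [0, 0, 1]])  # trans is the 2x3 result of cv2.getAffineTransform()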
def warpPerspectivePadded(src, dst, transf):
    src_h, src_w = src.shape[:2]
    lin_homg_pts = np.array([[0, src_w, src_w, 0], [0, 0, src_h, src_h], [1, 1, 1, 1]])

    trans_lin_homg_pts = transf.dot(lin_homg_pts)
    trans_lin_homg_pts /= trans_lin_homg_pts[2,:]

    minX = np.min(trans_lin_homg_pts[0,:])
    minY = np.min(trans_lin_homg_pts[1,:])
    maxX = np.max(trans_lin_homg_pts[0,:])
    maxY = np.max(trans_lin_homg_pts[1,:])

    # calculate the needed padding and create a blank image to place dst within
    dst_sz = list(dst.shape)
    pad_sz = dst_sz.copy()  # to get the same number of channels
    pad_sz[0] = np.round(np.maximum(dst_sz[0], maxY) - np.minimum(0, minY)).astype(int)
    pad_sz[1] = np.round(np.maximum(dst_sz[1], maxX) - np.minimum(0, minX)).astype(int)
    dst_pad = np.zeros(pad_sz, dtype=np.uint8)

    # add translation to the transformation matrix to shift to positive values
    anchorX, anchorY = 0, 0
    transl_transf = np.eye(3,3)
    if minX < 0:
        anchorX = np.round(-minX).astype(int)
        transl_transf[0,2] += anchorX
    if minY < 0:
        anchorY = np.round(-minY).astype(int)
        transl_transf[1,2] += anchorY
    new_transf = transl_transf.dot(transf)
    new_transf /= new_transf[2,2]

    dst_pad[anchorY:anchorY+dst_sz[0], anchorX:anchorX+dst_sz[1]] = dst

    warped = cv2.warpPerspective(src, new_transf, (pad_sz[1],pad_sz[0]))

    return dst_pad, warped
Example of running the function
Finally, we can call this function with some real images and homographies and see how it pans out. I'll borrow the example from LearnOpenCV:
src = cv2.imread('book2.jpg')
pts_src = np.array([[141, 131], [480, 159], [493, 630],[64, 601]], dtype=np.float32)
dst = cv2.imread('book1.jpg')
pts_dst = np.array([[318, 256],[534, 372],[316, 670],[73, 473]], dtype=np.float32)
transf = cv2.getPerspectiveTransform(pts_src, pts_dst)
dst_pad, warped = warpPerspectivePadded(src, dst, transf)
alpha = 0.5
beta = 1 - alpha
blended = cv2.addWeighted(warped, alpha, dst_pad, beta, 1.0)
cv2.imshow("Blended Warped Image", blended)
cv2.waitKey(0)
And we end up with this padded warped image:
[image: padded and warped result]
as opposed to the typical cut off warp you would normally get.

2 byte per pixel TGA's color to qRgb

I need to read TGAs with PyQt, and so far this seems to be working fine, except where a TGA has 2 bytes per pixel as opposed to 3 or 4. My code is taken from here: http://pastebin.com/b5Vz61dZ.
Specifically this section:
def getPixel(file, bytesPerPixel):
    'Given the file object f, and number of bytes per pixel, read in the next pixel and return a qRgba uint'
    pixel = []
    for i in range(bytesPerPixel):
        pixel.append(ord(file.read(1)))
    if bytesPerPixel == 4:
        pixel = [pixel[2], pixel[1], pixel[0], pixel[3]]
        color = qRgba(*pixel)
    elif bytesPerPixel == 3:
        pixel = [pixel[2], pixel[1], pixel[0]]
        color = qRgb(*pixel)
    elif bytesPerPixel == 2:
        # if greyscale
        color = QColor.fromHsv(0, pixel[0], pixel[1])
        color = color.value()
    return color
and this part:
elif bytesPerPixel == 2:
    # if greyscale
    color = QColor.fromHsv(0, pixel[0], pixel[1])
    color = color.value()
How would I use the pixel[0] and pixel[1] values to get the colour in the correct format and colour space?
Any thoughts, ideas or help please!!!
pixel = [ pixel[1]*2 , pixel[1]*2 , pixel[1]*2 ]
color = qRgb(*pixel)
works for me. Correct luminance and all. Though I'm not sure doubling the pixel[1] value would work for all instances.
Thank you for all the help istepura :)
http://lists.xcf.berkeley.edu/lists/gimp-developer/2000-August/013021.html
"Pixels are stored in little-endian order, in BGR555 format."
So, you have to take "leftmost" 5 bits of pixel[1] as Blue, rest 3 bits + 2 "leftmost" bits of pixel[0] would be Green, and next 5 bits of pixel[0] would be Red.
In your case, I suppose, the code should be something like:
pixel = [(pixel[1]&0xF8)>>3, ((pixel[1]&0x7)<<2)|((pixel[0]&0xC0)>>6), (pixel[0]&0x3E)>>1]
color = qRgb(*pixel)
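Note that the channels unpacked this way are 5-bit values (0 to 31), so the result will look very dark if they are passed to qRgb directly; they likely need expanding to the 8-bit range first. A hedged sketch of that extra step (the helper name is mine):
def expand5(v):
    # map a 5-bit channel (0-31) onto the 8-bit range (0-255)
    return (v << 3) | (v >> 2)

pixel = [expand5(c) for c in pixel]
color = qRgb(*pixel)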

How to generate a set of random colors where no two colors are almost similar?

I currently use the following function to generate a random hexadecimal representation of a color.
function getRandomColor($max_r = 192, $max_g = 192, $max_b = 192) {
    if ($max_r > 192) { $max_r = 192; }
    if ($max_g > 192) { $max_g = 192; }
    if ($max_b > 192) { $max_b = 192; }
    if ($max_r < 0) { $max_r = 0; }
    if ($max_g < 0) { $max_g = 0; }
    if ($max_b < 0) { $max_b = 0; }
    return '#' . dechex(rand(0, 192)) . dechex(rand(0, 192)) . dechex(rand(0, 192));
}
Notice that I set the max value to 192 instead of 255 for the sole reason that I am avoiding very light colors, since I will be using the random color as a foreground on a white background.
My question is: how do I generate an indefinitely large set of colors in which no two colors are almost the same, e.g. #D964D9 and #FF3EFF?
It might be better to use HSV coordinates. If you don't need white or black, you can set S and V to their maximum values, and generate H values that are not too close to each other (mod 360 degrees). Then convert to RGB.
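A minimal sketch of that idea (shown in Python using its built-in colorsys; the 0.75 value and the function name are mine, and the same approach translates directly to PHP):
import colorsys

def distinct_colors(n, s=1.0, v=0.75):
    # evenly spaced hues; v < 1 keeps the colours dark enough for a white background
    out = []
    for i in range(n):
        r, g, b = colorsys.hsv_to_rgb(i / n, s, v)
        out.append('#{:02x}{:02x}{:02x}'.format(int(r * 255), int(g * 255), int(b * 255)))
    return out

print(distinct_colors(8))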
There are several methods which spring to mind:
Set up an array of n standard colors and interchange them randomly to produce the desired "random" colors.
Fill an array of n colors; generate a random color and check if there is something "close" already in the array. If so, choose another random color.
Select each color as a deterministic sequence, like a simple hash value, designed to not produce duplicate values. Grey code springs to mind.
Your algorithm could randomly generate RGB colors (as it's doing now); however, you could, for example, verify that the two R's are sufficiently different before accepting the color choice. The algorithm could repeat that step (say, up to 4...10...N times) for a given R, G and/or B.
while ( (R1 > $max_r/2) && (R2 > $max_r/2) ) {
// Both are in the upper half of range, get a new random value for R1.
}
Other possibilities:
Repeat for the lower half of range
Further sub-divide ranges (into 1/3's or 1/4's)
Repeat for G and B tones

Programmatically darken a Hex colour [closed]

What's the easiest way to programmatically darken a hex colour?
If you're not bothered about too much control, and just want a generally darker version of a colour, then:
col = (col & 0xfefefe) >> 1;
Is a nice quick way to halve a colour value (assuming it's packed as a byte per channel, obviously).
In the same way brighter would be:
col = (col & 0x7f7f7f) << 1;
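A worked example (the numbers are mine) of what those two lines do to each channel:
col = 0xFF6600
darker = (col & 0xfefefe) >> 1    # 0x7F3300: FF -> 7F, 66 -> 33, 00 -> 00
brighter = (col & 0x7f7f7f) << 1  # 0xFECC00: the mask drops each channel's top bit so the shift cannot overflow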
Convert the hex color into integer RGB components:
#FF6600 = rgb(255, 102, 0)
If you want to make it darker by 5%, then simply reduce all integer values by 5%:
255 - 5% = 242
102 - 5% = 96
0 - 5% = 0
= rgb(242, 96, 0)
Convert back to hex color
= #F26000
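The same steps as a small sketch (the function name and truncation choice are mine):
def darken(hex_color, percent):
    # split into R, G, B, scale each down by `percent`, and re-assemble the hex string
    rgb = [int(hex_color.lstrip('#')[i:i + 2], 16) for i in (0, 2, 4)]
    darker = [int(c * (100 - percent) / 100) for c in rgb]
    return '#{:02X}{:02X}{:02X}'.format(*darker)

print(darken('#FF6600', 5))  # '#F26000', matching the worked example above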
A function implemented in javascript:
// credits: richard maloney 2006
function getTintedColor(color, v) {
    if (color.length > 6) { color = color.substring(1, color.length) }
    var rgb = parseInt(color, 16);
    var r = Math.abs(((rgb >> 16) & 0xFF) + v); if (r > 255) r = r - (r - 255);
    var g = Math.abs(((rgb >> 8) & 0xFF) + v); if (g > 255) g = g - (g - 255);
    var b = Math.abs((rgb & 0xFF) + v); if (b > 255) b = b - (b - 255);
    r = Number(r < 0 || isNaN(r)) ? 0 : ((r > 255) ? 255 : r).toString(16);
    if (r.length == 1) r = '0' + r;
    g = Number(g < 0 || isNaN(g)) ? 0 : ((g > 255) ? 255 : g).toString(16);
    if (g.length == 1) g = '0' + g;
    b = Number(b < 0 || isNaN(b)) ? 0 : ((b > 255) ? 255 : b).toString(16);
    if (b.length == 1) b = '0' + b;
    return "#" + r + g + b;
}
Example:
> getTintedColor("ABCEDEF", 10)
> #c6f7f9
Well, I don't have any pseudocode for you, but a tip. If you want to darken a color and maintain its hue, you should convert that hex to HSB (hue, saturation, brightness) rather than RGB. This way, you can adjust the brightness and it will still look like the same color without hue shifting. You can then convert that HSB back to hex.
given arg darken_factor  # a number from 0 to 1; 0 = no change, 1 = black
for each byte in rgb_value
    byte = byte * (1 - darken_factor)
I pieced together a nice two-liner function for this:
Programmatically Lighten or Darken a hex color (or rgb, and blend colors)
shadeColor2(hexcolor,-0.05) for 5% darker
shadeColor2(hexcolor,-0.25) for 25% darker
Use positives for lightening.
Split the hex color into its RGB components.
Convert each of these components into an integer value.
Multiply that integer by a fraction, such as 0.5, making sure the result is also integer.
Alternatively, subtract a set amount from that integer, being sure not to go below 0.
Convert the result back to hex.
Concatenate these values in RGB order, and use.
RGB colors (in hexadecimal RGB notation) get darker or lighter by adjusting shade, key, lightness, or brightness. See the playground: colorizer.org
Option 1. Translate R, G, B values to darken shade
This one is simple, but easy to mess up. Here is subtracting 16 points off the (0,255) scale from each value:
myHex = 0x8c36a9;
darkerHex = myHex - 0x101010;
# 0x7c2699;
The hex will underflow if any of the R,G,B values are 0x0f or lower. Something like this would fix that.
myHex = 0x87f609;
darkenBy = 0x10;
floor = 0x0;
darkerHex = (max((myHex >> 16) - darkenBy, floor) << 16) + \
            (max(((myHex & 0xff00) >> 8) - darkenBy, floor) << 8) + \
             max(((myHex & 0xff) - darkenBy), floor);
# 0x77e600
# substitute `ceiling=0xff;` and `min((myHex ...) + lightenBy, ceiling)` for lightening
Option 2. Scale R, G, B values to increase black
In the CMYK model, key (black) is 1 - max of R, G, B values on (0,1) scale.
This one is simple enough that you can get good results without too much code. You're rescaling the distribution of R, G, B values by a single scaling factor.
Express the scaling factor as 2-digit hex (so 50% would be .5*0x100 or 0x80, 1/16th is 0x10 and 10% rounds down to 0x19 ).
# Uses floor division (//) so the arithmetic stays integer in Python 3 as well
myHex = 0x8c36a9;
keyFactor = 0x10;           # Lighten or darken by 6.25%
R = myHex >> 16;            # 0x8c
G = (myHex & 0xff00) >> 8;  # 0x36
B = myHex & 0xff;           # 0xa9
darkerHex = (((R - R*keyFactor//0x100) << 16) +  # Darker R
             ((G - G*keyFactor//0x100) << 8) +   # Darker G
              (B - B*keyFactor//0x100))          # Darker B
# 0x84339f
# substitute `(X + keyFactor - X*keyFactor//0x100)` for lightening
# 0x9443af
Option 3. Reduce Lightness or Brightness at constant hue
In the HSL representation of RGB, lightness is the midpoint between min and max of R, G, B values. For HSV, brightness is the max of R, G, B values.
Consider using your language's built-in or an external RGB/HEX to HSL/HSV converter. Then adjust your L/V values and convert back to RGB/HEX. You can do the conversion by hand, as in #1 & #2, but the implementation may not save you any time over an existing converter (see the links for the maths).
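For example, a hedged sketch using Python's built-in colorsys as the converter (any RGB/HSL library works the same way; the 0.85 factor and the function name are mine):
import colorsys

def darken_via_hls(hex_color, factor=0.85):
    r, g, b = (int(hex_color.lstrip('#')[i:i + 2], 16) / 255 for i in (0, 2, 4))
    h, l, s = colorsys.rgb_to_hls(r, g, b)
    r, g, b = colorsys.hls_to_rgb(h, l * factor, s)  # scale lightness, keep hue and saturation
    return '#{:02x}{:02x}{:02x}'.format(round(r * 255), round(g * 255), round(b * 255))

print(darken_via_hls('#8c36a9'))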
You should consider darkening the color in the L*a*b* color space. Here's an example in JavaScript using chroma.js:
chroma.hex("#FCFC00").darker(10).hex() // "#dde000"
A hex colour such as #FCFCFC consists of three pairs representing RGB. The second part of each pair can be reduced to darken any colour without altering the colour considerably.
e.g. to darken #FCFCFC, lower the value of the second digit in each pair (the Cs) to give #F0F0F0.
Reducing the first part of each pair by a small amount will also darken the colour, but you will start to affect the colour more (e.g. turning a green to a blue).
