Invalid parameters in CGBitmapContext constructor - xamarin.ios

I'm trying to create a blank bitmap in a Xamarin.iOS project using the CGBitmapContext constructor(s). However, no matter what I try I just get the error "System.Exception: Invalid parameters to context creation"
Example code:
const int width = 100;
const int height = 100;
const int bytesPerPixel = 4;
const int bitsPerComponent = 8;
var intBytes = width * height * bytesPerPixel;
byte[] pixelData = new byte[intBytes];
var colourSpace = CGColorSpace.CreateDeviceRGB();
using (var objContext = new CGBitmapContext(pixelData, width, height, bitsPerComponent, width * bytesPerPixel, colourSpace, CGBitmapFlags.ByteOrderDefault))
{
    // ...
}
I have tried changing most of the parameters; I tried fixing the data block and passing it as an IntPtr. I tried using null as the first parameter so that the system would allocate the data. And I've tried various flags in the last parameter. I always get that same error. What parameter is wrong? And what needs to be changed in the code above to make it execute?

I changed the last parameter from CGBitmapFlags.ByteOrderDefault to CGBitmapFlags.PremultipliedLast, and the error disappeared.
Refer to the CGBitmapFlags Enumeration documentation.
As the link says:
This enumeration specifies the layout information for the component data in a bitmap.
This enumeration determines the in-memory organization of the data and includes the color model, whether there is an alpha channel present and whether the component values have been premultiplied.
I think we have to select the flag that matches the layout of the data, especially the color model.
For the same reason, if you choose CreateDeviceCmyk, CGBitmapFlags.None would be appropriate.
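For example, a minimal sketch of a call that works with the device RGB colour space, reusing the sizes from the question, could look like this (the drawing inside the using block is left as a placeholder):
const int width = 100;
const int height = 100;
const int bytesPerPixel = 4;      // RGBA
const int bitsPerComponent = 8;
byte[] pixelData = new byte[width * height * bytesPerPixel];
// For an RGB colour space with the alpha byte stored last, the flags must
// describe that layout (PremultipliedLast) rather than ByteOrderDefault.
using (var colourSpace = CGColorSpace.CreateDeviceRGB())
using (var context = new CGBitmapContext(pixelData, width, height, bitsPerComponent,
    width * bytesPerPixel, colourSpace, CGBitmapFlags.PremultipliedLast))
{
    // Draw into the context here; the pixels end up in pixelData.
}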

Related

Custom filter bank is not generating the expected output

Please refer to this article.
I have implemented section 4.1 (Pre-processing).
The preprocessing step aims to enhance image features along a set of
chosen directions. First, image is grey-scaled and filtered with a
sharpening filter (we subtract from the image its local-mean filtered
version), thus eliminating the DC component.
We selected 12 not overlapping filters, to analyze 12 different
directions, rotated with respect to 15° each other.
GitHub Repository is here.
Since the formula given in the article is incorrect, I have tried two different sets of formulas.
The first set of formulas:
The second set of formulas:
The expected output should be:
Neither of them gives proper results.
Can anyone suggest a modification?
The most relevant part of the source code is here:
public List<Bitmap> Apply(Bitmap bitmap)
{
    Kernels = new List<KassWitkinKernel>();
    double degrees = FilterAngle;
    KassWitkinKernel kernel;
    for (int i = 0; i < NoOfFilters; i++)
    {
        kernel = new KassWitkinKernel();
        kernel.Width = KernelDimension;
        kernel.Height = KernelDimension;
        kernel.CenterX = (kernel.Width) / 2;
        kernel.CenterY = (kernel.Height) / 2;
        kernel.Du = 2;
        kernel.Dv = 2;
        kernel.ThetaInRadian = Tools.DegreeToRadian(degrees);
        kernel.Compute();
        //SleuthEye
        kernel.Pad(kernel.Width, kernel.Height, WidthWithPadding, HeightWithPadding);
        Kernels.Add(kernel);
        degrees += degrees;
    }
    List<Bitmap> list = new List<Bitmap>();
    Bitmap image = (Bitmap)bitmap.Clone();
    //PictureBoxForm f = new PictureBoxForm(image);
    //f.ShowDialog();
    Complex[,] cImagePadded = ImageDataConverter.ToComplex(image);
    Complex[,] fftImage = FourierTransform.ForwardFFT(cImagePadded);
    foreach (KassWitkinKernel k in Kernels)
    {
        Complex[,] cKernelPadded = k.ToComplexPadded();
        Complex[,] convolved = Convolution.ConvolveInFrequencyDomain(fftImage, cKernelPadded);
        Bitmap temp = ImageDataConverter.ToBitmap(convolved);
        list.Add(temp);
    }
    return list;
}
Perhaps the first thing that should be mentioned is that the filters should be generated with angles which increase in FilterAngle increments (in your case 15 degrees). This can be accomplished by modifying KassWitkinFilterBank.Apply as follows (see this commit):
public List<Bitmap> Apply(Bitmap bitmap)
{
    // ...
    // The generated template filter from the equations gives a line at 45 degrees.
    // To get the filter to highlight lines starting with an angle of 90 degrees
    // we should start with an additional 45 degrees offset.
    double degrees = 45;
    KassWitkinKernel kernel;
    for (int i = 0; i < NoOfFilters; i++)
    {
        // ... setup filter (unchanged)
        // Now increment the angle by FilterAngle
        // (not "+= degrees" which doubles the value at each step)
        degrees += FilterAngle;
    }
This should give you the following result:
It is not quite the result from the paper and the differences between the images are still quite subtle, but you should be able to notice that the scratch line is most intense in the 8th figure (as would be expected since the scratch angle is approximately 100-105 degrees).
To improve the result, we should feed the filters with a pre-processed image in the same way as described in the paper:
First, image is grey-scaled and filtered with a sharpening filter (we subtract from the image its local-mean filtered version), thus eliminating the DC component
When you do so, you will get a matrix of values, some of which will be negative. As a result this intermediate processing result is not suitable to be stored as a Bitmap. As a general rule when performing image processing, you should keep all intermediate results in double or Complex as appropriate, and only convert back the final result to Bitmap for visualization.
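As an illustration only (not code from your repository), the sharpening step can be kept entirely in double[,]; the Sharpen name and the simple box-filter local mean below are my assumptions:
// Sketch: remove the DC component by subtracting a local-mean (box-filtered)
// copy from the grey-scaled image, keeping everything in double[,].
public static double[,] Sharpen(double[,] gray, int windowSize)
{
    int h = gray.GetLength(0), w = gray.GetLength(1), r = windowSize / 2;
    double[,] result = new double[h, w];
    for (int y = 0; y < h; y++)
    {
        for (int x = 0; x < w; x++)
        {
            double sum = 0;
            int count = 0;
            for (int dy = -r; dy <= r; dy++)
            {
                for (int dx = -r; dx <= r; dx++)
                {
                    int yy = y + dy, xx = x + dx;
                    if (yy >= 0 && yy < h && xx >= 0 && xx < w)
                    {
                        sum += gray[yy, xx];
                        count++;
                    }
                }
            }
            // The difference can be negative, which is exactly why this
            // intermediate result must stay in double[,] and not a Bitmap.
            result[y, x] = gray[y, x] - sum / count;
        }
    }
    return result;
}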
Integrating your changes to add image sharpening from your GitHub repository, while keeping intermediate results as doubles, can be achieved by changing the input bitmap and temporary image variables to use the double[,] data type instead of Bitmap in the KassWitkinFilterBank.Apply method (see this commit):
public List<Bitmap> Apply(double[,] bitmap)
{
    // [...]
    double[,] image = (double[,])bitmap.Clone();
    // [...]
}
which should give you the following result:
Or to better highlight the difference, here is figure 1 (0 degrees) on the left, next to figure 8 (105 degrees) on the right:

How to get position/off-set of an element in android with respect to parent element?

In my project I have a layout like:
<FrameLayout> ---------(1)
    <SurfaceView>
        <FrameLayout> ------(2)
    </SurfaceView>
</FrameLayout>
I want to get the position/offset of FrameLayout (2) with respect to FrameLayout (1).
When I try FrameLayout1.getLeft() and FrameLayout1.getTop(), both return zero.
What is the correct way to get the (left, top) of nested elements?
Thank you!
Update --->
int dpi = getResources().getDisplayMetrics().densityDpi;
int offset1 = childFrameLayoutPosition[0] - rootFrameLayoutPosition[0]; // x offset
int offset2 = childFrameLayoutPosition[1] - rootFrameLayoutPosition[1]; // y offset
// F1 is the child FrameLayout
newBitmap = Bitmap.createBitmap(bitmap, offset1 * w, offset2 * w, F1.getWidth() * w, F1.getHeight() * w);
After doing this, I got an error saying that the width and height parameters must be less than the width and height of the bitmap.
int[] rootFrameLayoutPosition = new int[2];
int[] childFrameLayoutPosition = new int[2];
rootFrameLayout.getLocationInWindow(rootFrameLayoutPosition);
childFrameLayout.getLocationInWindow(childFrameLayoutPosition);
int offset = childFrameLayoutPosition[1] - rootFrameLayoutPosition[1];
Maybe I confused x and y.

How can I store and access images in a Mat of OpenCV

I am trying to use:
cv::Mat source;
const int histSize[] = {intialframes, initialWidth, initialHeight};
source.create(3, histSize, CV_8U);
for saving multiple images in one matrix. However, when I do so, it gives me dims = 3 and -1 for rows and cols.
Is this correct?
If not, what is the bug?
If yes, how can I access my images one by one?
Reading the documentation of the class cv::Mat (doc),
you can see that cv::Mat.rows and cv::Mat.cols hold the number of rows and columns for a 2D array, and -1 otherwise.
With source.create(3, histSize, CV_8U); you are creating a 3D array.
The cv::Mat documentation explains how to access the elements.
With the create method the matrix is continuous and organized in a plane-by-plane fashion.
EDIT
The first part of the text in the documentation, after the code of the class definition, tells you how to access each element of the matrix using the step[] member of the matrix:
If you want to access pixel (u, v) of image i, you need to get a pointer to the data and use pointer arithmetic to reach the desired pixel:
int sizes[] = { 10, 200, 100 };
cv::Mat M(3, sizes, CV_8UC1);
//get a pointer to the pixel
uchar *px = M.data + M.step[0] * i + M.step[1] * u + M.step[2] * v;
//get the pixel intensity
uchar intensity = *px;

Setting Column width in Apache POI

I am writing a tool in Java using the Apache POI API to convert an XML file to MS Excel. In my XML input, I receive the column width in points. But the Apache POI API has a slightly queer logic for setting column width, based on font size etc. (refer to the API docs).
Is there a formula for converting points to the width expected by Excel? Has anyone done this before?
There is a setRowHeightInPoints() method, though :( but no equivalent for columns.
P.S.: The input XML is in ExcelML format, which I have to convert to MS Excel.
Unfortunately there is only the setColumnWidth(int columnIndex, int width) method on the Sheet class, in which width is measured in units of 1/256 of a character width of the standard font (the first font in the workbook); if your fonts vary, you cannot use it directly.
There it is explained how to calculate the width as a function of the font size. The formula is:
width = Truncate([{NumOfVisibleChar} * {MaxDigitWidth} + {5PixelPadding}] / {MaxDigitWidth}*256) / 256
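As a worked example, assuming a maximum digit width of 7 pixels (typical for the default font at 96 DPI, but not guaranteed), a column that should display 8 characters works out as:
width = Truncate([8 * 7 + 5] / 7 * 256) / 256
      = Truncate(2230.857...) / 256
      = 2230 / 256
      ≈ 8.71
The intermediate value 2230 is the number you would pass to setColumnWidth, since it expects units of 1/256 of a character width.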
You can always use autoSizeColumn(int column, boolean useMergedCells) after inputting the data in your Sheet.
Please be careful with the usage of autoSizeColumn(). It can be used without problems on small files, but make sure the method is called only once (at the end) for each column, not inside a loop, which would make no sense.
Please avoid using autoSizeColumn() on large Excel files; the method causes a performance problem.
We used it on a file with 110k rows and 11 columns, and the method took ~6 minutes to autosize all columns.
For more details have a look at: How to speed up autosizing columns in Apache POI?
You can also use the utility methods mentioned in this blog: Getting cell width and height from Excel with Apache POI. It can solve your problem.
Copy & paste from that blog:
static public class PixelUtil {
    public static final short EXCEL_COLUMN_WIDTH_FACTOR = 256;
    public static final short EXCEL_ROW_HEIGHT_FACTOR = 20;
    public static final int UNIT_OFFSET_LENGTH = 7;
    public static final int[] UNIT_OFFSET_MAP = new int[] { 0, 36, 73, 109, 146, 182, 219 };

    public static short pixel2WidthUnits(int pxs) {
        short widthUnits = (short) (EXCEL_COLUMN_WIDTH_FACTOR * (pxs / UNIT_OFFSET_LENGTH));
        widthUnits += UNIT_OFFSET_MAP[(pxs % UNIT_OFFSET_LENGTH)];
        return widthUnits;
    }

    public static int widthUnits2Pixel(short widthUnits) {
        int pixels = (widthUnits / EXCEL_COLUMN_WIDTH_FACTOR) * UNIT_OFFSET_LENGTH;
        int offsetWidthUnits = widthUnits % EXCEL_COLUMN_WIDTH_FACTOR;
        pixels += Math.floor((float) offsetWidthUnits / ((float) EXCEL_COLUMN_WIDTH_FACTOR / UNIT_OFFSET_LENGTH));
        return pixels;
    }

    public static int heightUnits2Pixel(short heightUnits) {
        int pixels = (heightUnits / EXCEL_ROW_HEIGHT_FACTOR);
        int offsetWidthUnits = heightUnits % EXCEL_ROW_HEIGHT_FACTOR;
        pixels += Math.floor((float) offsetWidthUnits / ((float) EXCEL_ROW_HEIGHT_FACTOR / UNIT_OFFSET_LENGTH));
        return pixels;
    }
}
So when you want to get the cell width and height, you can use these to get the value in pixels; the values are approximate.
PixelUtil.heightUnits2Pixel((short) row.getHeight())
PixelUtil.widthUnits2Pixel((short) sh.getColumnWidth(columnIndex));
With Scala there is a nice wrapper, spoiwo.
You can do it like this:
Workbook(mySheet.withColumns(
    Column(autoSized = true),
    Column(width = new Width(100, WidthUnit.Character)),
    Column(width = new Width(100, WidthUnit.Character)))
)
I solved my problem by setting a default width for all columns, like below:
int width = 15; // where width is a number of characters
sheet.setDefaultColumnWidth(width);

Programmatically measure text string in pixels for Silverlight

In WPF there is the FormattedText class in the System.Windows.Media namespace (MSDN: FormattedText) that I can use like so:
private static Size GetTextSize(string txt, string font, int size, bool isBold)
{
    Typeface tf = new Typeface(new System.Windows.Media.FontFamily(font),
                               FontStyles.Normal,
                               (isBold) ? FontWeights.Bold : FontWeights.Normal,
                               FontStretches.Normal);
    FormattedText ft = new FormattedText(txt, new CultureInfo("en-us"), System.Windows.FlowDirection.LeftToRight,
                                         tf, (double)size, System.Windows.Media.Brushes.Black, null, TextFormattingMode.Display);
    return new Size { Width = ft.WidthIncludingTrailingWhitespace, Height = ft.Height };
}
Is there a good approach in Silverlight to getting the width in pixels (at the moment height isn't important) besides making a call to the server?
An approach I've seen used, which may not work in your particular case, is to throw the text into an unstyled TextBlock and then read the width of that control, like so:
private double GetTextWidth(string text, int fontSize)
{
    TextBlock txtMeasure = new TextBlock();
    txtMeasure.FontSize = fontSize;
    txtMeasure.Text = text;
    double width = txtMeasure.ActualWidth;
    return width;
}
It's a hack, no doubt.
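If you also need to account for the font family and weight, as in your WPF helper, a sketch along the same lines could work; the extra parameters are my additions and not tested against every font:
// Sketch: measure text with an off-screen TextBlock, also honouring the
// font family and a bold weight. In Silverlight a TextBlock reports a usable
// ActualWidth as soon as Text is set, even outside the visual tree.
private double GetTextWidth(string text, string fontFamily, int fontSize, bool isBold)
{
    TextBlock txtMeasure = new TextBlock();
    txtMeasure.FontFamily = new FontFamily(fontFamily);
    txtMeasure.FontSize = fontSize;
    txtMeasure.FontWeight = isBold ? FontWeights.Bold : FontWeights.Normal;
    txtMeasure.Text = text;
    return txtMeasure.ActualWidth;
}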
