In the GLSL spec, and other sources about GLSL, sampler types are available in 3 dimensions: sampler1D, sampler2D, and sampler3D.
However, when I try to compile GLSL programs using WebGL in Chrome (both the regular build and Canary), sampler2D and sampler3D are accepted, but sampler1D gives a syntax error. Code:
uniform sampler1D tex1;
Error:
FS ERROR: ERROR: 0:9: 'sampler1D' : syntax error
This error occurs even if I give Canary the command line argument --use-gl=desktop.
I am running Chrome 12.0.742.68 beta-m, and Canary 13.0.782.1.
The chipset I have is Nvidia Quadro NVS 160M.
Is it possible that Nvidia allows 2- and 3-dimensional texture samplers, but not 1D? I've tried searching for information to that effect, but have not found anything.
No, your problem isn't related to NVIDIA's GLSL implementation. WebGL is based on OpenGL ES 2.0, which has no 1D textures at all; it only has 2D textures and cube maps (3D textures are available as an extension), so you won't be able to use a sampler1D in WebGL.
Solution? Just use a 2D texture with a height of 1 with a sampler2D.
Note: if you use desktop OpenGL (OpenGL >= 2.0), you will be able to use 1D textures and sampler1Ds.
An example of creating an OpenGL 2D texture object with a height of 1:
// Allocate immutable storage for a 256x1 RGB texture with 8 mip levels
// (glTexStorage2D requires GL 4.2 or ARB_texture_storage)
glTexStorage2D(GL_TEXTURE_2D, 8, GL_RGB8, 256, 1);
// Upload the 256x1 row of palette data into mip level 0
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 256, 1, GL_RGB, GL_UNSIGNED_BYTE, palette);
And the corresponding call in GLSL, using a sampler2D object named "tex":
vec4 color = texture(tex, vec2(x, 1.0f));
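If your desktop context predates glTexStorage2D (it needs GL 4.2 or ARB_texture_storage), a rough sketch of the same idea using only GL 2.0 calls could look like the following; the palette variable is assumed to be the same 256x1 RGB byte array as above:
// Sketch for a plain GL 2.0 context: create and fill the 256x1 texture
// with glTexImage2D instead of glTexStorage2D.
GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB8, 256, 1, 0, GL_RGB, GL_UNSIGNED_BYTE, palette);
// No mipmaps were uploaded, so use a non-mipmapped filter and clamp the edges
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);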
I am using Panda3D 1.10 (with Python 3.6) and I am trying to generate terrain on the fly.
So far, I have been able to produce this:
Now, my idea is to add textures to this terrain. I plan to add different textures for different kinds of ground and biome, but when I try to add a texture, it is applied to the whole terrain.
I only want to apply the texture to certain parts of the mesh, so I can combine different textures (dirt, grass, sand, etc.) and make a better terrain.
From this Panda3D documentation you can see an example of how to make the terrain:
from panda3d.core import ShaderTerrainMesh, Shader, load_prc_file_data
# Required for matrix calculations
load_prc_file_data("", "gl-coordinate-system default")
# ...
terrain_node = ShaderTerrainMesh()
terrain_node.heightfield_filename = "heightfield.png"  # Must be grayscale; you can also build the image in code with PNMImage()
terrain_node.target_triangle_width = 10.0
terrain_node.generate()
terrain_np = render.attach_new_node(terrain_node)
terrain_np.set_scale(1024, 1024, 60)
terrain_np.set_shader(Shader.load(Shader.SL_GLSL, "terrain.vert", "terrain.frag"))
In that link, there is also an example of both terrain.vert and terrain.frag.
I tried to apply this guide, but it seems that it doesn't work on a ShaderTerrainMesh, or so I think.
ts = TextureStage('ts')
myTexture = loader.loadTexture("textures/Grass.png")
terrain_np.setTexture(ts, myTexture)
terrain_np.setTexScale(ts, 10, 10)
terrain_np.setTexOffset(ts, -25, -25)
The output is the same. No matter how much I change the values of setTexScale and setTexOffset, the output is always entirely covered with grass.
How can I apply the texture to only a certain part of the model?
Obviously, I could build the image on the fly and do all the modifications with PNMImage(), but that would be slow and difficult, and I am fairly sure it must be possible without regenerating the texture each time.
EDIT
I've discovered that I can do this in order to apply the texture to only one place:
ts = TextureStage('ts')
myTexture = loader.loadTexture("textures/Grass.png")
myTexture.setWrapU(Texture.WM_border_color)
myTexture.setWrapV(Texture.WM_border_color)
myTexture.setBorderColor(VBase4(0, 0, 0, 0))
terrain_np.setTexture(ts, myTexture)
The problem is that I am not able to change the location of this texture, nor its size. Also, note that I don't want to reduce the scale of the texture: when I say I want to make a texture smaller, I mean "cut" or "erase" all the parts that don't fit in the place, not shrink the overall texture.
Sadly these commands aren't working:
myTexture.setTexScale(ts, LVecBase2(0.5, 250))
myTexture.setTexOffset(ts, LVecBase2(0.15, 0.5))
I am creating a game in HaxeFlixel, using flixel-ui to deal with the user interface. I have run into a problem using the FlxUI9SliceSprite. I have the following line of code to construct it:
_bg = new FlxUI9SliceSprite(0, 0, "assets/images/panel_bg.png", new Rectangle(0, 0, 280, 50), [8, 8, 16, 16]);
However, this does not work. I believe the problem is with the Graphic parameter "assets/images/panel_bg.png", as using null (which causes it to use a default graphic) works just fine.
When putting a try-catch around it, I got the following error message:
ArgumentError: Error #2015
I'm the maintainer of the flixel-ui library. The error you're experiencing is "Invalid Bitmap Data", which could be caused by any number of things. There are two possibilities that come to mind:
1) Your asset path is wrong, or your asset isn't being found for some reason.
2) Your asset is being loaded, but the 9-slice rules you're submitting produce an "illegal" transformation, leaving pieces of it as invalid Bitmap Data (say, a section where the math works out so that the width or height of the piece is 0 or negative).
Number 1 is unlikely, as that would probably just yield a null bitmap and it would simply fall back to the default asset.
The easiest way to resolve this would be for you to post a link to a sample of the image asset you're using; then I could inspect what the 9-slice logic you supplied would do to it and narrow down your issue.
I would like to generate PNG images with 1-bit (2 colors) or 2-bit (4 colors) depth using the libpng library.
Does anyone know how to do it? I have tested examples, and they all seem to work with 8-bit color depth.
I know about png_set_IHDR, but in the example I tested, when I change the depth parameter in png_set_IHDR from 8 to 2 or 1, my program only draws one pixel out of every 2 or 4. I think it's due to the memory allocation done with the png_malloc function.
In the example I am trying to modify (http://www.lemoda.net/c/write-png/), png_malloc allocates every pixel of the image with sizeof(uint8_t):
png_malloc (png_ptr, sizeof (uint8_t) * bitmap->width * pixel_size);
Can you tell me how to allocate 1-bit or 2-bit pixels?
Thanks.
In png_set_IHDR, the bit_depth parameter sets the bit depth of all color components. Together with PNG_COLOR_TYPE_RGB, you end up with one bit for the red component, one for the green and one for the blue.
You should call png_set_IHDR with PNG_COLOR_TYPE_PALETTE, and then if bit_depth is 1, you must have a palette with two colors (0 and 1), and if bit_depth is 2, one with four colors (0 to 3).
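For illustration, here is a minimal, untested sketch of writing a 1-bit palettized PNG; the size, file name, colors, and pixel pattern are made-up example values, and error handling (setjmp) is omitted for brevity. The key points are the bit_depth of 1 in png_set_IHDR, the two-entry palette set with png_set_PLTE, and png_set_packing(), which lets you hand libpng one byte per pixel (holding 0 or 1) and have it pack the bits down to the declared depth.
#include <png.h>
#include <cstdio>
#include <vector>

// Hypothetical sketch: write a 1-bit (two-color) palettized PNG.
void write_1bit_png(const char *filename, int width, int height)
{
    std::FILE *fp = std::fopen(filename, "wb");
    png_structp png_ptr = png_create_write_struct(PNG_LIBPNG_VER_STRING, NULL, NULL, NULL);
    png_infop info_ptr = png_create_info_struct(png_ptr);
    png_init_io(png_ptr, fp);

    // bit_depth = 1 with a palette color type
    png_set_IHDR(png_ptr, info_ptr, width, height, 1, PNG_COLOR_TYPE_PALETTE,
                 PNG_INTERLACE_NONE, PNG_COMPRESSION_TYPE_DEFAULT, PNG_FILTER_TYPE_DEFAULT);

    // Two palette entries: index 0 = black, index 1 = white
    png_color palette[2] = { {0, 0, 0}, {255, 255, 255} };
    png_set_PLTE(png_ptr, info_ptr, palette, 2);
    png_write_info(png_ptr, info_ptr);

    // Let libpng pack our one-byte-per-pixel rows down to 1 bit per pixel
    png_set_packing(png_ptr);

    std::vector<png_byte> row(width);        // one byte per pixel, value 0 or 1
    for (int y = 0; y < height; y++) {
        for (int x = 0; x < width; x++)
            row[x] = (x + y) & 1;            // arbitrary checkerboard pattern
        png_write_row(png_ptr, row.data());
    }

    png_write_end(png_ptr, info_ptr);
    png_destroy_write_struct(&png_ptr, &info_ptr);
    std::fclose(fp);
}
For bit_depth 2, the same sketch would use a four-entry palette and row values 0 to 3.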
I want to mask out the moving objects in a video.
I found that OpenCV has some built-in BackgroundSubtractors, which could save me a lot of time. However, according to the official reference, the function:
void BackgroundSubtractorMOG2::operator()(InputArray image, OutputArray fgmask, double learningRate=-1)
should output a mask, fgmask, but it doesn't. Instead, after invoking the above method, the fgmask variable contains only the "contour of the mask". That's weird. All I want is a simple closed region filled with white (for example) to represent the moving objects. How can I do that?
Any reply or recommendation would be much appreciated. Thanks a lot.
Here's my code:
#include <opencv2/opencv.hpp>

int main(int argc, char *argv[])
{
    // history = 30, varThreshold = 16, shadow detection disabled
    cv::BackgroundSubtractorMOG2 bg(30, 16.0, false);
    cv::VideoCapture cap(0);
    cv::Mat frame, fmask;
    cv::namedWindow("mask", cv::WINDOW_AUTOSIZE);
    for(;;)
    {
        cap >> frame;
        bg(frame, fmask, -1);            // -1 = automatic learning rate
        cv::imshow("mask", fmask);
        if(cv::waitKey(30) >= 0) break;
    }
    return 0;
}
A snapshot of the output video is:
P.S. My working environment is OpenCV 2.4.3 on OS X 10.8, with Xcode 4.5.2 and the Apple LLVM 4.1 compiler.
If you want the whole objects in the foreground to be filled with white pixels, then I would first ask you something about your experience.
My question is: with the code you posted above, do you get more white pixels when you generate more motion in front of your camera?
If yes, then there are two parameters relevant to your requirement.
The first is the history parameter, which you have configured as 30 in the constructor BackgroundSubtractorMOG2(30, 16.0, false). You can test this parameter by increasing it, say to 300. It maintains the motion history of the object in the foreground, so if you have moved completely away from your starting location within those 300 frames, you will get the whole object covered with white pixels, as you want, but it will be erased gradually. So it cannot give you a 100% solution.
The second parameter is the learning rate. In the code you posted, bg(frame, fmask, -1);, the -1 is your learning rate. You can set it anywhere from 0.0 to 1.0, and the default is -1 (automatic). When you set it to 0, you will get what you want for objects that were not part of the frame at the start of the video; you could call these "foreign objects". Foreign objects will be covered with white pixels, as in the sketch below.
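As a minimal sketch of the two tweaks together (the 100-frame warm-up period and the window handling are my own example choices, not something your code requires):
#include <opencv2/opencv.hpp>

// Minimal sketch combining both suggestions, using the OpenCV 2.4.x API.
int main()
{
    cv::BackgroundSubtractorMOG2 bg(300, 16.0, false);  // longer history: 300 frames
    cv::VideoCapture cap(0);
    cv::Mat frame, fmask;
    for (int i = 0; ; ++i)
    {
        cap >> frame;
        // Let the model learn the empty scene for the first 100 frames
        // (automatic rate), then freeze it so that anything entering the
        // scene afterwards stays fully white in the mask.
        double learningRate = (i < 100) ? -1.0 : 0.0;
        bg(frame, fmask, learningRate);
        cv::imshow("mask", fmask);
        if (cv::waitKey(30) >= 0) break;
    }
    return 0;
}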
Experiment with the information above and share your experience.
I draw a screen with OpenGL commands, and I need to save this screen in .bmp or .png format, but I can't manage to do it. I am using glReadPixels, but I don't know how to continue. How can I save this drawing in C++ with OpenGL?
Here it comes! You must include WinGDI.h (which I think the GL headers already pull in):
#include <windows.h>   // for BITMAPFILEHEADER / BITMAPINFOHEADER (WinGDI.h)
#include <GL/gl.h>
#include <cstdio>
#include <cstdlib>

void SaveAsBMP(const char *fileName)
{
    FILE *file;
    unsigned long imageSize;
    GLbyte *data = NULL;
    GLint viewPort[4];
    GLenum lastBuffer;
    BITMAPFILEHEADER bmfh;
    BITMAPINFOHEADER bmih;

    bmfh.bfType = 0x4D42;          // "BM"
    bmfh.bfReserved1 = 0;
    bmfh.bfReserved2 = 0;
    bmfh.bfOffBits = 54;           // sizeof(bmfh) + sizeof(bmih)

    glGetIntegerv(GL_VIEWPORT, viewPort);
    // 3 bytes per pixel, with the width rounded up to a multiple of 4 so each
    // row satisfies BMP's 4-byte row alignment (plus 2 spare bytes)
    imageSize = ((viewPort[2] + ((4 - (viewPort[2] % 4)) % 4)) * viewPort[3] * 3) + 2;
    bmfh.bfSize = imageSize + sizeof(bmfh) + sizeof(bmih);
    data = (GLbyte*)malloc(imageSize);

    // Pack rows to 4-byte alignment, matching the BMP row padding
    glPixelStorei(GL_PACK_ALIGNMENT, 4);
    glPixelStorei(GL_PACK_ROW_LENGTH, 0);
    glPixelStorei(GL_PACK_SKIP_ROWS, 0);
    glPixelStorei(GL_PACK_SKIP_PIXELS, 0);
    glPixelStorei(GL_PACK_SWAP_BYTES, 1);

    // Read the front buffer (the last frame shown on screen) as BGR bytes
    glGetIntegerv(GL_READ_BUFFER, (GLint*)&lastBuffer);
    glReadBuffer(GL_FRONT);
    glReadPixels(0, 0, viewPort[2], viewPort[3], GL_BGR, GL_UNSIGNED_BYTE, data);
    data[imageSize - 1] = 0;       // zero the two spare bytes at the end
    data[imageSize - 2] = 0;
    glReadBuffer(lastBuffer);

    file = fopen(fileName, "wb");

    bmih.biSize = 40;
    bmih.biWidth = viewPort[2];
    bmih.biHeight = viewPort[3];
    bmih.biPlanes = 1;
    bmih.biBitCount = 24;
    bmih.biCompression = 0;        // BI_RGB, uncompressed
    bmih.biSizeImage = imageSize;
    bmih.biXPelsPerMeter = 45089;
    bmih.biYPelsPerMeter = 45089;
    bmih.biClrUsed = 0;
    bmih.biClrImportant = 0;

    fwrite(&bmfh, sizeof(bmfh), 1, file);
    fwrite(&bmih, sizeof(bmih), 1, file);
    fwrite(data, imageSize, 1, file);
    free(data);
    fclose(file);
}
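For example, assuming a double-buffered window where the last rendered frame is still in the front buffer, you could call it right after your drawing code (the file name is just an example):
SaveAsBMP("screenshot.bmp");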
Unless you're feeling particularly ambitious (or perhaps masochistic) you probably want to use a library like DevIL that already supports this. The current version can load and/or save in both PNG and BMP formats, along with a few dozen others.
Compared to something like IJG, this is oriented much more heavily toward working with OpenGL or DirectX (e.g., it can load a file fairly directly into a texture or vice versa).
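As a rough sketch (the function name and output handling are my own; check the DevIL docs for the exact build you use), reading back the viewport with glReadPixels and handing it to DevIL could look something like this, with the output format chosen from the file extension:
#include <GL/gl.h>
#include <IL/il.h>
#include <vector>

// Rough sketch: read the current viewport and let DevIL pick the output
// format (.png, .bmp, ...) from the file extension.
void SaveViewportWithDevIL(const char *fileName)
{
    GLint vp[4];
    glGetIntegerv(GL_VIEWPORT, vp);

    std::vector<unsigned char> pixels(vp[2] * vp[3] * 3);
    glPixelStorei(GL_PACK_ALIGNMENT, 1);
    glReadPixels(vp[0], vp[1], vp[2], vp[3], GL_RGB, GL_UNSIGNED_BYTE, pixels.data());

    ilInit();
    ILuint image;
    ilGenImages(1, &image);
    ilBindImage(image);
    // Hand the raw RGB buffer to DevIL (3 channels, unsigned bytes).
    // Note: glReadPixels starts at the bottom-left, so the saved image may be
    // vertically flipped; ILU's iluFlipImage() can correct that if needed.
    ilTexImage(vp[2], vp[3], 1, 3, IL_RGB, IL_UNSIGNED_BYTE, pixels.data());
    ilEnable(IL_FILE_OVERWRITE);
    ilSaveImage(fileName);
    ilDeleteImages(1, &image);
}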
I know you're asking for raster formats, but an indirect way would be to first output vector graphics through gl2ps (http://www.geuz.org/gl2ps/). Examples of usage are provided with the package and on the site (http://www.geuz.org/gl2ps/#tth_sEc3).
Then, the vector output can be converted to the format of your choice using another tool (Inkscape, Image/GraphicsMagick, etc.) or library. An added benefit is you can convert to bitmaps of any resolution in the future.
One thing needs to be fixed:
bmih.biXPelsPerMeter = bmih.biYPelsPerMeter = 0;
Otherwise, some picture editors cannot open the file correctly.