How can I preserve the source raster projection when using gdal_translate? - python-3.x

I'm currently working on a small raster refining tool. The goal is to have a simple CLI tool that computes tiles from a georeferenced source raster and creates a corresponding index.shp. For this I'm using Python 3.7 and GDAL. The tool runs smoothly and generates the expected tiles and shapefile, but it loses the projection stored in the source raster. QGIS defaults the newly computed tiles to EPSG:4326 while informing me about an unknown projection. The original raster is in EPSG:25832.
My Setup:
Windows 10 64 bit
Python 3.7.2
GDAL: I cannot determine the exact version, since gdal-config is not installed and I cannot get it to work, but it is the 64-bit build installed from the binaries provided on gisinternals.com. The Windows software list says GDAL 204 MSVC 2017.
While running the script, I get error messages about missing files, e.g. pcs.csv, datum.csv, ellipsoid.csv and so on. This suggests that having those files available would fix my problem.
Oddly enough, I have used OSGeo4W to install Python 2.7 with GDAL, and there it works like a charm (after adjusting the Python parts, of course). Tiles get calculated and keep the projection of the source, without any external files specifying a projection and using the exact same data, which is really confusing to me.
To my understanding, there is no flag or option that forces GDAL to keep the projection. If I have overlooked or misunderstood the docs, I'm glad for advice.
Before anyone asks, I know that using the OSGeo4W installer is obviously the easy and working solution here. But keeping in mind that Python 2.7 will soon be discontinued, and using this as a chance to learn new things, I wanted to build a 3.7-based tool with GDAL installed on my machine.
The corresponding code looks like this and does the following:
1.) The command string is built.
2.) The string is handed to subprocess.run, which in turn executes it.
import subprocess

x = 0
for i in range(0, width, tilelenght):
    y = 0
    for j in range(0, height, tilelenght):
        gdaltranString = (f'gdal_translate -of GTIFF -srcwin {i} {j} {tilelenght} {tilelenght} '
                          f'{input_filepath} {output_filepath}{x}_{y}.tif')
        subprocess.run(gdaltranString)
        y = y + 1
    x = x + 1
The expected result would be a collection of functional .tif files that carry the EPSG code of the source file, in this case 25832.
But as already mentioned, the projection gets lost somewhere in the process.

So, I have found the solution to my problem, without really understanding how it became an issue to begin with.
The solution was to create a user variable GDAL_DATA pointing to the directory that holds the projection definition files.
The weird thing is that I now have GDAL_DATA both as a system variable and as a user variable, both pointing to the same directory.
If someone knows more about the mysterious ways of Windows environment variables, please share your wisdom, or the source of said wisdom.
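For anyone hitting the same thing, a per-process alternative to an account-wide variable is to set GDAL_DATA from the script itself before spawning the GDAL tools. This is only a sketch: the path below is a placeholder and must point to whatever folder actually holds pcs.csv, datum.csv, ellipsoid.csv and friends on your install.

import os
import subprocess

# Hypothetical GISInternals data location; adjust to your machine.
os.environ["GDAL_DATA"] = r"C:\Program Files\GDAL\gdal-data"

# Child processes inherit the variable, so gdal_translate / gdalsrsinfo
# can resolve EPSG:25832 instead of dropping the projection.
subprocess.run("gdalsrsinfo EPSG:25832")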

Related

No module named 'adsk' problem - Python script for Autodesk Fusion 360

I want to draw 3D spheres using x, y, z coordinates, so I am trying to use a Python script for Autodesk Fusion 360 (CAD).
However, I get the error shown in the image:
"No module named 'adsk'"
I tried to install the adsk module, but I can't install it.
Then I found that adsk is in the definitions folder, so I tried to run the file from there, but I get no result.
1) As described here, it may be required to run the script from the Tools panel of Fusion.
2) Can you try to copy your adsk folder to C:\your_path_to_python_folder\Lib\site-packages\?
I think, in your case (based on the screenshot you published in the post), just copy C:\Users\Wr\AppData\Roaming\Autodesk\Autodesk Fusion 360\API\Python\defs\adsk to C:\Python3.7\Lib\site-packages\adsk or C:\Users\Wr\AppData\Local\Programs\Python\Python37\Lib\site-packages\adsk.
3) Otherwise, try adding the location %AppData%\Autodesk\Autodesk Fusion 360\API\Python (i.e. C:\Users\Wr\AppData\Roaming\Autodesk\Autodesk Fusion 360\API\Python) to the PYTHONPATH environment variable (a sys.path alternative is sketched at the end of this answer).
4) Otherwise, you may need to launch scripts from a certain "environment" via the terminal, as described here.
In your case, probably:
cd "%AppData%\Roaming\Autodesk\Autodesk Fusion 360\API\Python"
# or
# cd "C:\Users\Wr\AppData\Roaming\Autodesk\Autodesk Fusion 360\API\Python"
.\python.exe "_here_is_address_to_your_script.py"
P.S. I can't comment, thus published as an answer.
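If changing environment variables is awkward, a rough sys.path equivalent of step 3 is to extend the path at the top of the script before importing adsk. This is only a sketch: the defs path is the assumed one from the screenshot, and the script still has to be run from inside Fusion for the API objects to actually work.

import sys

# Assumed location from the screenshot; adjust the user name / install path.
fusion_defs = r"C:\Users\Wr\AppData\Roaming\Autodesk\Autodesk Fusion 360\API\Python\defs"
if fusion_defs not in sys.path:
    sys.path.append(fusion_defs)

import adsk.core, adsk.fusion  # the adsk package should now be importable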

OpenModelica: No output variables or solution file

So I am a newbie to OpenModelica. I have a bit of experience using LMS Amesim. I created my first simple model using OM and simulated it from within OMEdit.
When I switch to the plot window, there are NO output variables to plot. That tells me that the simulation may not have run. However, no error messages popped up. When I checked the model, I found it to be fine (not overconstrained or underconstrained).
What gives? This is OM 1.14 on Linux Ubuntu 16.04.
My Modelica file, a simple 2nd-order system with feedback control, is available via pastebin here or may be downloaded here via a Google Drive link.
The messages that I have from the output window are:
/tmp/OpenModelica_drN/OMEdit/Feedback/Feedback -port=35318 -logFormat=xmltcp -override=startTime=0,stopTime=100,stepSize=0.2,tolerance=1e-6,solver=dassl,outputFormat=csv,variableFilter=.* -r=/tmp/OpenModelica_drN/OMEdit/Feedback/Feedback_res.csv -w -lv=LOG_STATS -inputPath=/tmp/OpenModelica_drN/OMEdit/Feedback -outputPath=/tmp/OpenModelica_drN/OMEdit/Feedback
The initialization finished successfully without homotopy method.
The simulation finished successfully.
This was a bug. Should be fixed now:
https://trac.openmodelica.org/OpenModelica/ticket/5251

How to get SMAC3 working for Python 3x on Windows

This is a great package for Bayesian optimization of hyperparameters (especially mixed integer/continuous/categorical... and it has been shown to be better than Spearmint in benchmarks). However, it is clearly meant for Linux. What do I do...?
First you need to download swig.exe (the whole package) and unzip it. Then drop it somewhere and add the folder to PATH so that the installer for SMAC3 can call swig.exe.
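A quick sanity check (plain Python, nothing SMAC-specific) that the folder really ended up on PATH before you run the SMAC3 installer:

import shutil

# Should print the full path to swig.exe; None means the folder is not on PATH yet.
print(shutil.which("swig"))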
Next, the resource module is going to cause issues, because it is only available on Linux; it is used specifically by Pynisher. You'll need to comment out import pynisher in the execute_func.py module. Then set use_pynisher: bool = False in the def __init__(self, ...) of the same module; the default is True.
Then go down to the middle of the module, where an if self.use_pynisher ... else statement exists. Our code now enters the else part, but it is not set up correctly. Change result = self.ta(config, **obj_kwargs) to result = self.ta(list(config.get_dictionary().values())). This part may still need to be adjusted depending on what kind of inputs your function handles, but essentially this enables the basic example shown in the included branin_fmin.py module (a small stand-in sketch follows below). If you are doing the random forest example, don't change it at all, etc.
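To make the effect of that edit concrete, here is a tiny self-contained stand-in (plain Python, not SMAC3 itself; the dict mimics config.get_dictionary() and the function mirrors the included branin example):

import math

def branin(x):
    # Toy target function taking one list of parameter values, like branin_fmin.py.
    x1, x2 = x
    return ((x2 - 5.1 / (4 * math.pi ** 2) * x1 ** 2 + 5 / math.pi * x1 - 6) ** 2
            + 10 * (1 - 1 / (8 * math.pi)) * math.cos(x1) + 10)

# What the edited else-branch ends up doing:
config_dict = {"x1": 2.5, "x2": 7.5}           # stands in for config.get_dictionary()
result = branin(list(config_dict.values()))    # i.e. result = self.ta(list(config.get_dictionary().values()))
print(result)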

How to build and run Light Table without error?

I've been trying for hours, but can't build and use Light Table. Every time I try to run deploy/LightTable, it hangs on a screen that simply says "Light Table". I receive this error*:
[14381:0519/204037:INFO:CONSOLE(27860)] "Uncaught TypeError: Cannot read property 'thread_STAR_' of undefined", source: file:///home/zaz/Desktop/LightTable/builds/lighttable-0.8.0-linux/resources/app/core/node_modules/lighttable/bootstrap.js (27860)
Here's what I've tried:
git clone https://github.com/LightTable/LightTable.git
cd LightTable
bash linux_deps.sh
./deploy/LightTable # creates frozen window, throws the error above
cd deploy
./LightTable # creates frozen window, throws the error above
./ltbin # creates frozen window, throws the error above
I also tried similar things after checking out the atom-shell branch and the 0.7.2 tag (and cleaning up all the files from the previous build). Each time, I received the error above.
Does anyone know what's going on here?
Has Light Table been completely abandoned? It seems the last commit was in March.
* Depending on the version I was trying to run, I also received other errors, but I don't think they're relevant (the error above was the only one that appeared for all versions):
[18593:0519/222845:INFO:gpu_info_collector_x11.cc(80)] NVCtrl extension does not exist.
[18593:0519/222845:ERROR:browser_main_loop.cc(226)] Gdk: gdk_window_set_icon_list: icons too large
Fontconfig warning: FcPattern object size does not accept value 11(i)
[14413:0519/204035:INFO:renderer_main.cc(212)] Renderer process started
A year later (the question was written in May 2015; it is now June 2016), LightTable 0.8.1 is out. I tried both the Linux binary and a clone from git, and it works fine.
For completeness, I'm also using Atom, and although I had no problems with "Cannot read property 'something' of undefined" in the Atom core, I did meet such problems in two or three Atom packages.
Both editors are based on the same Electron platform. LightTable is beautiful eye candy with quite revolutionary REPL integration, but it needs more polish to be usable to the same extent as Atom.
For example, LightTable does not save the workspace by default; that is done via a plugin. That's ridiculous.
But although Atom looks nice and powerful compared to simple editors, with a really huge number of available packages/plugins, LightTable is more elegant.
As I don't want to start a new semi-religious Atom vs. LightTable war resembling vi vs. emacs, I'll stop here. :)
I can't replicate your problems in LightTable v0.8.1, so I think that answers this question. If not, please add errors you get with v0.8.1.
For info about releases, please check: https://github.com/LightTable/LightTable/releases

Perl: libapt-pkg-perl AptPkg::Cache->new strange behaviour under precise

I have a very strange problem with the constructor of the AptPkg::Cache object in the precise package of libapt-pkg-perl (v0.1.25).
The Perl script is designed to download a Debian package for three different architectures (i386, armel, armhf). For each architecture I do the following:
Configure AptPkg::Config '$_config' with the right parameters and package-lists for the desired architecture.
Create the cache object with AptPkg::Cache->new.
Call the method AptPkg::Cache->policy to create the AptPkg::Policy object.
Call the method AptPkg::Policy->candidate("program-name").
Download the package for the selected architecture.
This works very well with Ubuntu Lucid, but with Ubuntu Precise I can only download the package for the first architecture defined. For the other two architectures there is no installation candidate (the method AptPkg::Policy->candidate("Package-Name") doesn't return an object).
I tried to build a workaround, and I found one way in which the script works for all three architectures, without problems, on precise:
If I create the cache object (with AptPkg::Cache->new) twice in a row, it works and the script downloads the Debian package for all three architectures:
my $cache = AptPkg::Cache->new;
$cache = AptPkg::Cache->new;
I'm sure that the problem has something to do with the method AptPkg::Cache->new, because I have checked everything else that could cause the problem twice. All config variables are set correctly, and I even get a different hash from AptPkg::Cache->new for each architecture, but it seems that I am overlooking something important.
I'm not very familiar with Perl, so I am asking whether someone can explain why the script works with the workaround but not without it. Furthermore, it looks quite strange to have the same line of code twice in your script.
Maybe you hit this bug - https://bugs.launchpad.net/ubuntu/+source/libapt-pkg-perl/+bug/994509
There is a script there to test if you're affected. If it's something else consider submitting a bug report.
edit: Just saw this is 11 months old :/
