I am writing software with PySide6. On my Mac the packaged app comes to 1.0 GiB. Is there an easy way to strip unnecessary files that I don't need to package?
I manually identified the files below as not necessary for my software, yet I still end up with more than 500 MB:
/Assistant.app
/Designer.app
/Linguist.app
/lupdate
/QtWebEngineCore
/QtWebEngineCore.framework
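For reference, here is a minimal sketch that prunes exactly these paths from a bundled PySide6 directory. The bundle path is a placeholder (it depends on your packaging tool), so verify it before deleting anything:
import shutil
from pathlib import Path

# Hypothetical bundle location; adjust to wherever your packager puts PySide6
pyside_dir = Path("dist/MyApp.app/Contents/Resources/PySide6")

for name in ("Assistant.app", "Designer.app", "Linguist.app",
             "lupdate", "QtWebEngineCore", "QtWebEngineCore.framework"):
    target = pyside_dir / name
    if target.is_dir():
        shutil.rmtree(target)  # whole .app/.framework directories
    elif target.exists():
        target.unlink()  # single binaries such as lupdate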
You can install only the PySide6-Essentials package from PyPI.
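For example:
pip install PySide6-Essentials
Essentials leaves out the add-on modules such as Qt WebEngine, which accounts for a large share of the size.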
You can build from source and, via the Qt installer, include just what you need.
P.S. If you are struggling with building PySide from source, I have a repo that might help.
This is the scenario:
I am using Python 3 (3.6 through 3.8 on Windows 10, using Pipenv and vanilla Python) to create an SQLite file with Spatial support and several triggers based on Spatial Indices.
Creating the database and adding records works just fine after loading Spatialite:
import sqlite3

conn = sqlite3.connect("my_database.sqlite")  # whichever file you are creating
conn.enable_load_extension(True)
conn.load_extension("mod_spatialite")
However, adding a spatial index with code such as the below
conn.execute("SELECT CreateSpatialIndex('nodes', 'geometry');")
returns the following error
updateTableTriggers: "no such module: rtree"
I tried compiling the rtree extension following the recommendations from
Compiling SQLite RTREE in MSVC?
and using VS 2019 (16.4.2).
But I get all sorts of errors when trying to load the result in SQLite (I might not be compiling it properly; I tried multiple things and nothing worked). My best attempt was a successful compilation following pretty much the instructions I referred to above, but when I attempted
p.conn.load_extension("libSqliteRtree.dll")
I got
sqlite3.OperationalError: The specified procedure could not be found.
I am really at a loss here, as there seems to be very little discussion of this topic anywhere I looked. A few questions that come to mind are:
Are there specific compilation instructions/tricks/compiler versions that I should be using?
Is it even possible to compile and load rtree in Python 3 using the standard sqlite3 library?
Is this particular to Windows?
Are there alternative SQLite Python packages that could do the job (I didn't find any on PyPI)?
It is critical, however, that the solution works across different platforms.
I was just having the exact same problem, with Python 3.8 x64. I believe the cause was that the sqlite3.dll in my Python installation's DLLs folder had been compiled without RTREE enabled (https://sqlite.org/rtree.html).
To resolve this I visited the SQLite website, downloaded the .zip with the latest sqlite3.dll for Windows x64 (hoping RTREE would be enabled in that build, because my own attempts at compiling it failed), and swapped the old DLL in the DLLs folder for the newly downloaded one. The RTREE error was gone! Detailed steps below:
1. Access https://sqlite.org/download.html and choose the "Precompiled Binaries for Windows" matching your system. Mine was x64 because I was running Python x64. Download the .zip and unzip it.
2. Find your Python installation folder (the one containing the python.exe you're running) and open its DLLs folder. You'll see a file there called sqlite3.dll. That's the one that comes with the Python installation.
3. Copy the sqlite3.dll from the unzipped folder in step 1 and paste it into the DLLs folder, clicking Yes to replace the existing file. That should solve the problem.
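To verify the swap worked, you can check which SQLite build Python now loads. A quick sketch; ENABLE_RTREE is the compile option the RTREE module depends on:
import sqlite3

conn = sqlite3.connect(":memory:")
print(sqlite3.sqlite_version)  # should now report the downloaded DLL's version
options = {row[0] for row in conn.execute("PRAGMA compile_options;")}
print("ENABLE_RTREE" in options)  # True means the RTREE module is compiled in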
I edited Keras' .optimizers and .layers modules locally, but Colab uses its own Keras & TensorFlow libraries. Uploading and then using the edited libraries would be rather involved given pathing and package interactions, and overkill for a few small edits.
The closest I've got to accessing a module is keras.optimizers.__file__, which gives a path I don't know what to do with: '/usr/local/lib/python3.6/dist-packages/keras/optimizers.py'
Can Colab libraries be edited? Permanently (not per-runtime)?
Colab now allows direct access to system files from the GUI itself. There you can view and edit all the installed libraries just as you would on your own PC.
Go to the Files icon in the left sidebar and go up one folder. From there, navigate to the path
usr/local/lib/python3.6/dist-packages
Here, find the package and make your edit.
Then restart the runtime, from Runtime/Restart Runtime option in the menu.
You could fork the libraries on GitHub, push your changes to a new branch, and then run:
!pip install git+https://github.com/your-username/keras.git#new-branch
Or even pin a specific commit:
!pip install git+https://github.com/your-username/keras.git#632560d91286
You will need to restart your runtime for the changes to work.
More details here.
Per-runtime solution
import keras.optimizers

# Read the edited module, saved beforehand as a .txt file
with open('optimizers.txt', 'r') as edited_file:
    contents_to_write = edited_file.read()

# Overwrite the installed module in place
with open(keras.optimizers.__file__, 'w') as module_file:
    module_file.write(contents_to_write)
Then restart the runtime (do not 'Reset all runtimes').
To clarify, (1) save edited module of interest as a .txt, (2) overwrite Colab module with the saved module via .__file__, (3) 'Reset all runtimes' restores Colab modules - use if module breaks
Considering its simplicity, it's as good as a permanent fix. For possibly better scalability, see fizzybear's solution.
I'm working with some new libraries and I'm afraid that my script might run into trouble in the future with unexpected package updates. So I want to create a new environment, but I don't want to manually install all the basic packages like numpy, pandas, etc. Does it make sense to create a new environment using conda that is an exact copy of my base environment, or could that create some sort of conflict?
Copying using conda works, but if you used only virtualenv, you should manually build a requirements.txt, create a new virtual environment, activate it, and then simply use pip install -r requirements.txt. Note the key word: manually.
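For reference, the two routes look like this (environment names are placeholders; on Windows the activate script lives under Scripts\):
# Clone the base environment with conda
conda create --name myproject --clone base

# Or, with virtualenv: create, activate, then install from the file
virtualenv myproject-env
source myproject-env/bin/activate
pip install -r requirements.txt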
For example if you needed requests, numpy and pandas, your requirements.txt would look like this:
requests==2.20.0
numpy==1.15.2
pandas==0.23.4
You could actually exclude numpy in this case, since pandas pulls it in, but you should still keep it because you use it directly; if you removed pandas you'd still need it. I build the file by installing a new package, finding it with pip freeze, and putting it into requirements.txt with its current version. If I ever get to the point of sharing the project with someone, I replace == with >=; most of the time that's enough. If it conflicts, check what the conflicting library requires and adjust if possible, e.g. you pinned the latest numpy as a requirement, but an older library specifically needs version x.y.z and your code is perfectly fine with that version too (the ideal case).
Anyway, this file is all you have to keep around to preserve your virtual environment. It also helps if you are going to distribute your project, as anyone can drop it into a new folder with your source and create their own environment without any hassle.
Now, this is why you should build it manually:
$ pip freeze
certifi==2018.10.15
chardet==3.0.4
idna==2.7
numpy==1.15.2
pandas==0.23.4
python-dateutil==2.7.3
pytz==2018.5
requests==2.20.0
six==1.11.0
urllib3==1.24
virtualenv==16.0.0
six? pytz? What? Other libraries use them, but we don't even know what they are for unless we look them up, and they shouldn't be listed as project dependencies: they will be installed automatically as dependencies of the packages that actually need them.
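If you're curious which of your direct dependencies pulls one of these in, recent versions of pip list that in the Required-by field of pip show (output trimmed; the exact list depends on your environment):
$ pip show six
Name: six
Version: 1.11.0
Required-by: python-dateutil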
This way you ensure there won't be problems except in very rare cases, where one library you use needs a new version of another library while a second library wants an ancient version of that same library; when that happens it's a big mess, but normally it doesn't.
I need the source package for the mingw64/mingw-w64-x86_64-gcc-libs-8.2.0-3 package. I believe this is generated from the mingw-w64-x86_64-gcc source package. Looking in the repository I can find mingw-w64-gcc-7.3.0-2.src.tar.gz but nothing for gcc-8.*.
Most other packages have a simple relationship between binary and source package names. In a few cases (GCC is one) a single source package is used to generate multiple binary packages. However the naming is usually fairly obvious, and version numbers stay the same. I can't find any GCC-related source packages with the "8.2.0" version number.
Does anyone know where to get the source for the gcc-libs mingw package from?
You might want to look here; you can find GCC 8.1.0 there.
Open an issue on https://github.com/Alexpux/MINGW-packages/issues
Apparently the maintainer forgot to upload the source package, or some script is failing.
I'm trying to build a custom ICU with a minimal data set. I've tried to follow the instructions at Reducing the Size of ICU's Data: Conversion Tables, but many of the files referenced don't exist in the ICU 4.8.1 source distribution. Specifically, I cannot find any files that match ucm*.mk.
I've also tried creating reslocal.mk files as indicated in e.g. ICU's source/data/lang/resfiles.mk. That did not help either. My build is the typical:
$ ./configure --prefix=/some/dir
$ make
$ make install
Regardless of what I do, libicudata.so.48.1 is about 17M. It shouldn't matter, but I'm building on Ubuntu 11.04.
See the note at the top of that page:
Note that ICU for C by default comes with pre-built data. The source data files are not included unless ICU is downloaded from the source repository. Alternatively, the Data Customizer may be used to customize the pre-built data.
Your ICU is reading the prebuilt package from icu/source/data/in/*.dat and ignoring the .mk files. We have had requests for the source data to be included as a downloadable .zip and so we plan to do this in the future.
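As a quick check, you can confirm the prebuilt archive is what gets packaged; the file follows the icudt<version><endianness>.dat naming pattern, so on a little-endian machine with ICU 4.8 you should see something like:
$ ls icu/source/data/in/
icudt48l.dat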
If you have any suggestions for how our instructions can be made more clear, please file a bug. I've added a copy of that notice to the section you referenced.