possibility of using several virtual environments simultaneously - python-3.x

Hello. I noticed that some of the libraries I use are needed for development but also for analysis, and sometimes I have to use both. Is there a way to use a base virtual environment as a template, and can I use the two sets of libraries simultaneously?

Yes.
Create two virtual environments; call them v1 and v2.
Open two terminals.
In the first terminal, activate v1 and install your library 1.
In the second terminal, activate v2 and install your library 2.
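A minimal sketch of those steps, assuming the built-in venv module and hypothetical environment and package names (v1, v2, lib-dev, lib-analysis):

$ python3 -m venv v1                # first terminal
$ source v1/bin/activate            # on Windows: v1\Scripts\activate
(v1) $ pip install lib-dev

$ python3 -m venv v2                # second terminal
$ source v2/bin/activate            # on Windows: v2\Scripts\activate
(v2) $ pip install lib-analysis

Each terminal now runs against its own interpreter and site-packages, so the two installs never conflict.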

Yes, of course you can use multiple virtual environments; you just need to create them. If you're already familiar with Anaconda environments, you can create them with conda; otherwise go the normal way and create venvs.
After creating them, open two terminals, one per venv, and run your scripts.
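If you take the conda route, the equivalent sketch (same hypothetical names as above) would be:

$ conda create -n v1 python=3       # terminal 1
$ conda activate v1
(v1) $ pip install lib-dev

$ conda create -n v2 python=3       # terminal 2
$ conda activate v2
(v2) $ pip install lib-analysis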

Related

Can I have more than one connection in databricks-connect?

I have set up on my PC a miniconda Python environment where I have installed the databricks-connect package and ran databricks-connect configure to point the tool at the Databricks instance I want to use when developing code in the US.
I also need to connect to a different Databricks instance for developing code in the EU, and I thought I could do this by setting up a second miniconda environment, installing databricks-connect in it, and setting that environment's configuration to point to the new Databricks instance.
Alas, this did not work. When I run databricks-connect configure in either miniconda environment, I see the same configuration in both, namely whichever one I configured last.
My question therefore is: is there a way to have multiple databricks-connect connections at the same time and toggle between them without having to reconfigure each time?
Thank you for your time.
Right now, databricks-connect relies on a central configuration file, and this causes problems. There are two approaches to work around that:
Use environment variables as described in the documentation (a sketch of them follows this list), but they have to be set somehow, and you still need different Python environments for different versions of databricks-connect
Specify the parameters as Spark configuration (see the same documentation)
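For the first approach, a sketch of the variables involved (names as in the legacy databricks-connect documentation; double-check them against your version):

$ export DATABRICKS_ADDRESS=https://<your-workspace-url>
$ export DATABRICKS_API_TOKEN=<token>
$ export DATABRICKS_CLUSTER_ID=<cluster-id>
$ export DATABRICKS_ORG_ID=<org-id>
$ export DATABRICKS_PORT=15001

Setting these per shell session lets two sessions talk to two different instances without touching ~/.databricks-connect.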
For each DB cluster, do the following:
create a separate Python environment with name <name> and activate it
install databricks-connect into it
configure databricks-connect
move ~/.databricks-connect to ~/.databricks-connect-<name>
write a wrapper script that activates the Python environment and symlinks ~/.databricks-connect-<name> to ~/.databricks-connect (I have such a script for Zsh; it would be too long to paste in full, but a minimal sketch follows.)
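Since the full script is not pasted here, this is a minimal sketch of the idea, assuming conda environments named after the clusters and the config files renamed as above:

#!/bin/zsh
# usage: dbc-switch <name>  (sketch; assumes miniconda at its default path)
source ~/miniconda3/etc/profile.d/conda.sh                    # make `conda activate` available in scripts
name=$1
conda activate "$name"                                        # activate the matching python env
ln -sf ~/.databricks-connect-"$name" ~/.databricks-connect    # point the central config at this cluster

You would typically source this from your shell so the environment activation persists in the current session.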

How does Galaxy Meteor hosting for Windows work?

I have a node.js application I have adopted from a more senior developer. I want to deploy it, and I know it will work because he already deployed it several times. I am reading these instructions:
https://galaxy-guide.meteor.com/deploy-quickstart.html
I use Windows, as did he.
How does deployment work?
Take these instructions:
Windows: If you are using Windows, the commands to deploy are slightly different. You need to set the environment variable first, then run the deployment command second (the syntax is the same as everything you'd put for meteor deploy).
In the case of US East, the commands would be:
$ SET DEPLOY_HOSTNAME=galaxy.meteor.com
$ meteor deploy [hostname] --settings path-to-settings.json
Am I just supposed to go to the source directory on my laptop and run these commands? What then happens? Is the source uploaded to their server from my laptop, and then their magic takes care of the rest?
What about when I want to make a change to the code? Do I just do the same thing, pointing to an existing container, and, again, they do the magic?
Am I just supposed to go to the source directory on my laptop and run these commands? What then happens? Is the source uploaded to their server from my laptop, and then their magic takes care of the rest?
It is not magic. You basically go to your dev root and enter these commands. Under the hood it builds your app for production (including minification and prod flags for optimization) and, once complete, opens a connection to the AWS infrastructure and pushes the build bundle.
See: https://github.com/meteor/meteor/blob/devel/tools/meteor-services/deploy.js
On the server there will be some install and post-install scripts that set up the whole environment for you and, if there are no errors in the process, start your app.
These scripts have of course some automation, depending on your account settings and the commands you have entered.
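If you want to see what gets uploaded, you can produce the same kind of production bundle locally with meteor build (hypothetical output path; deploy does this step for you):

$ cd my-app                                            # your dev root
$ meteor build ../build-output --architecture os.linux.x86_64
# ../build-output now contains the .tar.gz bundle that deploy would push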
What about when I want to make a change to the code? Do I just do the same thing, pointing to an existing container, and, again, they do the magic?
You will have to rebuild (using the given deploy command) again but Galaxy will take care of the rest.

Can I deactivate a monitor/display with node/electron?

Hi, I'm writing an app with Electron (http://electron.atom.io/). I would like to deactivate the monitor/display of the PC and only activate it again when something in the app happens (for energy saving). Is there a way to do this?
The only thing I found is the powerSaveBlocker (http://electron.atom.io/docs/api/power-save-blocker/), which doesn't help me...
You'll need to use native system APIs to do this; on Windows you can use one of the solutions proposed in Turn on/off monitor.
One way to do that is by executing batch files from Electron/Node on Windows, shell scripts on Linux, and whatever macOS uses to execute commands. These scripts would contain the OS-level commands to turn the display on/off, which are easily available.
When to fire these scripts depends on your application logic.
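As a sketch, these are the kinds of commands such scripts could contain (xset is standard X11 tooling on Linux; the Windows line assumes NirSoft's third-party nircmd utility is installed):

$ xset dpms force off       # Linux/X11: blank the display immediately
$ xset dpms force on        # Linux/X11: wake it again
$ nircmd.exe monitor off    # Windows, via nircmd

You would then fire them from Electron with Node's child_process module whenever your app logic decides to.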

Package Python 3 executable that does not require programming knowledge

I would like to send my Python 3 script to my father-in-law and grandmother. Each has their own Windows machine; one is running Windows 7 and the other is running XP.
I am not sure how to package it up for them to run on their respective machines. Is there such a method?
My script prompts, while in the IDE environment, for keyword, path, and filename, so there are some inputs the user has to type in. Not sure if that will affect the portable script creation.
After reading through some responses here on Stack Overflow, I found that py2exe does not work with Python 3.
Also Pytonw, suggested here as well, looks very complicated. I don't think either of my relatives could carry out those steps.
Lastly, when I visit the cx_Freeze site I get uBlock's "Badware risks" filter and a big warning window.
I've used cx_Freeze to deploy Python apps compiled to Windows .exe files for use by computer-novice users for several years, and it has worked well. You will occasionally run into issues with dependencies that you will have to take extra steps for (datetime, for example), but nothing that isn't surmountable. The easiest way to handle it is to install the folder on the computer yourself and create a desktop shortcut to it for the user; that keeps it simple for them. If you are not close to them, you can always use a program like TeamViewer to gain access to their computer, like remote desktop.
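As a rough sketch of the cx_Freeze route (hypothetical script name; the exact flags vary between cx_Freeze versions, so check cxfreeze --help):

$ pip install cx_Freeze
$ cxfreeze your_script.py --target-dir dist
# dist/ now holds your_script.exe plus its dependencies

Zip the dist folder, copy it to the target machine, and point a desktop shortcut at the .exe. Console input() prompts still work, since the .exe opens a console window.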

Provisioning vs Packaging a box with all the necessary tools in Vagrant

I am trying to set up a development environment with Vagrant. I am using CentOS 6. From what I have read about Vagrant, I should set up provisioning scripts to install the packages I need when I run vagrant up. For me, this process takes quite a while. However, it seems like it would be more efficient to install everything once and create a new box. Is there some advantage to provisioning that I'm missing? What is best for me to do in this case?
You can provision everything and when you want to run vagrant up for the nth time you can do so without provisioning:
vagrant up --no-provision
As to why provision at all? It's mostly so that you can easily take the base box and then change, for example, one or more items in the list to see the effect.
But it keeps the base clean and reusable.
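If you do decide to bake everything into a reusable box instead, the commands are along these lines (hypothetical box name):

$ vagrant up                                    # provision once, the slow way
$ vagrant package --output centos6-dev.box      # snapshot the provisioned VM (VirtualBox provider)
$ vagrant box add centos6-dev centos6-dev.box   # register it for future Vagrantfiles

New projects can then start from centos6-dev and skip the long provisioning run.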
