Python - Pyramid doesn't run on localhost - pyramid

I am trying to install and run Python Pyramid.
I installed Anaconda, created a virtual environment, and used pip install "pyramid==1.7.3" to install Pyramid.
Then I executed the following helloworld.py:
from wsgiref.simple_server import make_server
from pyramid.config import Configurator
from pyramid.response import Response

def hello_world(request):
    return Response('Hello %(name)s!' % request.matchdict)

if __name__ == '__main__':
    config = Configurator()
    config.add_route('hello', '/hello/{name}')
    config.add_view(hello_world, route_name='hello')
    app = config.make_wsgi_app()
    server = make_server('0.0.0.0', 8080, app)
    server.serve_forever()
This doesn't start the server as I expected: when I open localhost:8080 in the browser, it gives me a "404 Not Found - The resource could not be found." error.
Here is the Command Prompt output.
What am I missing here?

For newcomers to web application development in Python, I recommend working through the Quick Tutorial. Do not skip any steps, including Requirements. It provides a good overview of creating web applications in Python and its ecosystem, with references for further in-depth reading.
Anaconda is targeted at the data science audience. Our documentation doesn't provide specific instructions for working within Anaconda, so you would have to learn that part on your own.

It was serving at http://localhost:8080/hello/world, not at localhost:8080.

Use pcl in ROS with python

I'm new to ROS and I have a problem.
I'm trying to visualize a point cloud using the PCL library.
First I start my ROS RealSense camera by typing "roslaunch realsense2_camera rs_camera.launch filters:=pointcloud" in a terminal.
Then, I made a catkin package with a listener.py script where I subscribe to the RealSense topic and get the point cloud information that I want. So far so good!
Now I want to visualize this point cloud using the PCL library, but when I run my package with "rosrun pcl listener.py" I get the error
import pcl
ImportError: No module named pcl
So my question is: what am I missing?
How do I import pcl in a ROS package?
Do I have to add something to the CMakeLists.txt and/or package.xml?
I include my listener.py script.
#!/usr/bin/env python
import rospy
import numpy as np
import pcl
import ros_numpy
import sensor_msgs.point_cloud2 as pc2
from sensor_msgs.msg import PointCloud2

def callback_ptcloud(ptcloud_data):
    # Convert the PointCloud2 message into a structured numpy array
    pc = ros_numpy.numpify(ptcloud_data)
    points = np.zeros((pc.shape[0], 3))
    points[:, 0] = pc['x']
    points[:, 1] = pc['y']
    points[:, 2] = pc['z']
    p = pcl.PointCloud(np.array(points, dtype=np.float32))

def listener():
    rospy.Subscriber("/camera/depth/color/points", PointCloud2, callback_ptcloud)
    rospy.spin()

if __name__ == '__main__':
    rospy.init_node("realsense_subscriber", anonymous=True)
    listener()
Thank you in advance.
Any help is appreciated.
Regarding PCL, the easiest way is to use it with C++. It's easy to install and there is a lot of documentation and many examples.
Regarding python-pcl, for some reason it's not easy to install (at least for me). What I did to use it was to download the library, put it somewhere on my PC, and when I want to use it in my scripts I just add the absolute path of pcl to the import path.
By downloading the python-pcl files I mean finding and downloading __init__.py, _pcl.py, _pcl.so, and all the other necessary files of the library.
For example, in my Python scripts I use:
import os
import sys
sys.path.append(os.path.expanduser('~/path_to_downloaded_pcl'))  # expanduser resolves '~'; this directory contains the __init__.py
import pcl

NLTK is called and got error of "punkt" not found on databricks pyspark

I would like to call NLTK to do some NLP on databricks by pyspark.
I have installed NLTK from the library tab of databricks. It should be accessible from all nodes.
My py3 code (the function must return a string for the UDF, so the tokenized sentences are stringified here):
import pyspark.sql.functions as F
from pyspark.sql.types import StringType
import nltk
nltk.download('punkt')

def get_keywords1(col):
    # Tokenize the column value into sentences and return them as a string
    sentences = nltk.sent_tokenize(col)
    return str(sentences)

get_keywords_udf = F.udf(get_keywords1, StringType())
I run the above code and got:
[nltk_data] Downloading package punkt to /root/nltk_data...
[nltk_data] Package punkt is already up-to-date!
When I run the following code:
t = spark.createDataFrame(
    [(2010, 1, 'rdc', 'a book'), (2010, 1, 'rdc', 'a car'),
     (2007, 6, 'utw', 'a house'), (2007, 6, 'utw', 'a hotel')],
    ("year", "month", "u_id", "objects"))
t1 = t.withColumn('keywords', get_keywords_udf('objects'))
t1.show()  # error here!
I got error:
PythonException:
An exception was thrown from the Python worker. Please see the stack trace below.
Traceback (most recent call last):
LookupError:
**********************************************************************
Resource punkt not found.
Please use the NLTK Downloader to obtain the resource:
>>> import nltk
>>> nltk.download('punkt')
For more information see: https://www.nltk.org/data.html
Attempted to load tokenizers/punkt/PY3/english.pickle
Searched in:
- '/root/nltk_data'
- '/databricks/python/nltk_data'
- '/databricks/python/share/nltk_data'
- '/databricks/python/lib/nltk_data'
- '/usr/share/nltk_data'
- '/usr/local/share/nltk_data'
- '/usr/lib/nltk_data'
- '/usr/local/lib/nltk_data'
- ''
I have downloaded 'punkt'. It is located at
/root/nltk_data/tokenizers
I have updated the PATH in the Spark environment with the folder location.
Why can't it be found?
I tried the solutions from "NLTK. Punkt not found" and "How to config nltk data directory from code?",
but none of them work for me.
I have tried to update the search path with
nltk.data.path.append('/root/nltk_data/tokenizers/')
but it does not work.
It seems that nltk cannot see the newly added path!
I also copied punkt to the path where nltk will search:
cp -r /root/nltk_data/tokenizers/punkt /root/nltk_data
but nltk still cannot see it.
Thanks.
When spinning up a Databricks single-node cluster this will work fine. Installing nltk via pip and then using the nltk.download module to get the prebuilt models/text works.
Assumptions: the user is programming in a Databricks notebook with Python as the default language.
When spinning up a multi-node cluster there are a couple of issues you will run into.
You are registering a UDF that relies on code from another module. In order for this UDF to work on every node in the cluster, the module needs to be installed at the cluster level (i.e. nltk installed on the driver and all worker nodes). The module can be installed via an init script at cluster start time or via the libraries section in the Databricks Compute section. More on that here (I also give code examples below):
https://learn.microsoft.com/en-us/azure/databricks/libraries/cluster-libraries
Now when you run the UDF, the module will exist on all nodes of the cluster.
Using nltk.download() to get data that the module references. When we call nltk.download() interactively in a multi-node cluster, it will only download to the driver node. So when your UDF executes on the other nodes, those nodes will not contain the needed references in the paths it looks in by default. To see these default paths, run nltk.data.path.
To overcome this there are two possibilities I have explored. One of them works.
(doesn't work) Using an init script, install nltk, then in that same init script call nltk.download via a one-liner bash/Python expression after the install, like:
python -c "import nltk; nltk.download('all')"
I ran into the issue where nltk is installed but not found after it has installed. I'm assuming virtual environments are playing a role here.
(works) Using an init script, install nltk.
Create the script
dbutils.fs.put('/dbfs/databricks/scripts/nltk-install.sh', """
#!/bin/bash
pip install nltk""", True)
Check it out
%sh
head '/dbfs/databricks/scripts/nltk-install.sh'
Configure cluster to run init script on start up
Databricks Cluster Init Script Config
In the cluster configuration create the environment variable NLTK_DATA="/dbfs/databricks/nltk_data/". This is used by the nltk package to search for data/model dependencies.
Databricks Cluster Env Variable Config
Start the cluster.
After it is installed and the cluster is running, check to make sure the environment variable was correctly created.
import os
os.environ.get("NLTK_DATA")
Then check to make sure that nltk is pointing towards the correct paths.
import nltk
nltk.data.path
If '/dbfs/databricks/nltk_data/' is within the list, we are good to go.
Download the stuff you need.
nltk.download('all', download_dir="/dbfs/databricks/nltk_data/")
Notice that we downloaded the dependencies to Databricks storage. Now every node will have access to the nltk default dependencies. Because we specified the environment variable NLTK_DATA at cluster creation time, when we import nltk it will look in that directory. The only difference here is that we now pointed nltk to our Databricks storage, which is accessible by every node.
Now since the data exists in mounted storage at cluster start up we shouldn't need to redownload the data every time.
After following these steps you should be all set to play with nltk and all of its default data/models.
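As a loose sanity check of the wiring above, here is a standard-library-only simulation of the documented NLTK_DATA behavior (this mirrors what the docs describe, not NLTK's actual code): NLTK splits NLTK_DATA on the OS path separator and searches those directories before its built-in defaults, which is why setting the variable at cluster creation redirects every node's lookup.

```python
import os

def nltk_style_search_paths(environ):
    """Simulate how NLTK builds its data search path from NLTK_DATA.

    Illustration only: NLTK_DATA entries (split on os.pathsep) come first,
    followed by a couple of NLTK's well-known defaults, abbreviated here.
    """
    paths = []
    if "NLTK_DATA" in environ:
        paths.extend(environ["NLTK_DATA"].split(os.pathsep))
    paths.extend(["/usr/share/nltk_data", "/usr/local/share/nltk_data"])
    return paths

print(nltk_style_search_paths({"NLTK_DATA": "/dbfs/databricks/nltk_data/"}))
```

With the environment variable set, the DBFS path is consulted first; without it, only the defaults remain, which is exactly the multi-node failure mode described above.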
I recently encountered the same issue when using NLTK in a Glue job.
Adding the 'missing' file to all nodes resolved the issue for me. I'm not sure if it will help in Databricks, but it's worth a shot.
sc.addFile('/tmp/nltk_data/tokenizers/punkt/PY3/english.pickle')
Drew Ringo's suggestion almost worked for me.
If you're using a multi-node cluster in Databricks, you will face the problems Ringo mentioned. For me a much simpler solution was running the following init script:
dbutils.fs.put("dbfs:/databricks/scripts/nltk_punkt.sh", """#!/bin/bash
pip install nltk
python -m nltk.downloader punkt""",True)
Make sure to add the filepath under Advanced options -> Init Scripts found within the Cluster Configuration menu.
The first of Drew Ringo's two possibilities will work if your cluster's init script looks like this:
%sh
/databricks/python/bin/pip install nltk
/databricks/python/bin/python -m nltk.downloader punkt
He is correct to assume that his original issue relates to virtual environments.
This helped me to solve the issue:
import nltk
nltk.download('all')

Kivy Installation Guide for Windows 10

I've been trying to follow online YouTube videos to install Kivy on my Windows 10 computer (python-3.7.5-amd64, Kivy 1.11.1). Aside from the fact that they seem to have different variations on how they approach the topic, I am unable to get a setup that works satisfactorily.
These are the steps I am following:
I install Python (python-3.7.5-amd64.exe) to C:\Python37
I modify the path to include the following: C:\Python37\Scripts;C:\Python37;C:\Python37\Libs;C:\Python37\DLLs;C:\Python37\Lib\site-packages;
I add the following environment variable: PYTHONPATH = C:\Python37\DLLs;C:\Python37\Libs;C:\Python37;C:\Python37\Scripts;C:\Python37\Lib\site-packages;
I open a command window and type in the following commands (taken from kivy.org)
python -m pip install --upgrade pip wheel setuptools virtualenv
python -m pip install docutils pygments pypiwin32 kivy_deps.sdl2==0.1.* kivy_deps.glew==0.1.*
python -m pip install kivy_deps.gstreamer==0.1.*
python -m pip install kivy_deps.angle==0.1.*
python -m pip install kivy==1.11.1
python -m pip install kivy_examples==1.11.1
I try to run a simple program. From within Windows Explorer I right-click the code file (label.py) and from the shortcut menu select Python.
A window pops up for an instant, and a directory called __pycache__ gets created containing kivy.cpython-37.pyc. Double-clicking that causes the program to run.
Is it possible to have an easier solution in which the source code, once compiled, executes?
If I open a command prompt and attempt to execute the source code using the command python label.py, I get the following:
Traceback (most recent call last):
File "label.py", line 1, in <module>
from kivy.app import App
File "C:\Users\chrib\Google Drive\_Software\Python_Kivy\kivy.py", line 1, in <module>
from kivy.base import runTouchApp
ModuleNotFoundError: No module named 'kivy.base'; 'kivy' is not a package
Why should this happen?
Also is it possible to have a cleaner development environment. I am used to Visual Studio IDE and it would be great if I can use this environment.
Thanks
Code for label.py:
from kivy.app import App
from kivy.uix.label import Label

class MyApp(App):
    def build(self):
        return Label(text='Hello world!')

if __name__ == '__main__':
    MyApp().run()
I've been trying to follow online youTube videos to install kivy on my Windows 10 computer
Have you tried simply following the instructions on kivy.org? There's no need to use youtube videos, the installation is largely a normal python module install.
I try to run a simple program. From within Windows Explorer I right click the code file (label.py) and from the shortcut menu select python.
Don't do this, run the file by opening a command prompt and typing python yourfilename.py. That way you will see the full traceback for any errors that occur.
A window pops up for an instant and a directory called __pycache__ gets created with kivy.cpython-37.pyc. Double clicking that causes the program to run.
It sounds likely that the first run is crashing. As above, you want to get the information about why.
Is it possible to have an easier solution in which the source code, once compiled, executes?
When you run the code it does execute. As above, it's probably crashing.
ModuleNotFoundError: No module named 'kivy.base'; 'kivy' is not a package
Have you made a file named kivy.py? It looks likely that you have, and that this file is being imported in preference to the installed kivy module.
Also is it possible to have a cleaner development environment. I am used to Visual Studio IDE and it would be great if I can use this environment.
I'm not sure what you consider unclean about your development environment, but you should think in terms of python environments and their installed packages. Kivy is just a python module that you install into a python environment. When you use an IDE, it may integrate with one or more python environments (with options to switch between them). There's nothing special about using Visual Studio with Kivy, just do whatever you normally do to use it with Python.
I figured it out. I had a file in the code directory called kivy.py. I renamed it and everything worked.
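Shadowing like this can be diagnosed quickly by printing where Python actually found the module. A sketch using the standard library's `json` as a stand-in for `kivy`, so it runs even without Kivy installed:

```python
import importlib

# Print the file Python actually imported for a given module name.
# If this points at a file in your project directory instead of
# site-packages, a local file is shadowing the installed package.
mod = importlib.import_module("json")  # substitute "kivy" in the real case
print(mod.__file__)
```

A path ending in the project folder (e.g. ...\Python_Kivy\kivy.py) rather than ...\site-packages\kivy\__init__.py confirms the shadowing.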

PIP installed pywin32 service fails: Incorrect function

Running Python 3.6.5, pywin32 223.
After install I have included the C:\Python32\lib\win32 folder in the path.
I am just running the normal test-service skeleton that seems to be all over the internet. It can be found below.
import win32serviceutil
import win32service
import win32event
import servicemanager
import socket

class AppServerSvc(win32serviceutil.ServiceFramework):
    _svc_name_ = "TestService"
    _svc_display_name_ = "Test Service"

    def __init__(self, args):
        win32serviceutil.ServiceFramework.__init__(self, args)
        self.hWaitStop = win32event.CreateEvent(None, 0, 0, None)
        socket.setdefaulttimeout(60)

    def SvcStop(self):
        self.ReportServiceStatus(win32service.SERVICE_STOP_PENDING)
        win32event.SetEvent(self.hWaitStop)

    def SvcDoRun(self):
        servicemanager.LogMsg(servicemanager.EVENTLOG_INFORMATION_TYPE,
                              servicemanager.PYS_SERVICE_STARTED,
                              (self._svc_name_, ''))
        self.main()

    def main(self):
        pass

if __name__ == '__main__':
    win32serviceutil.HandleCommandLine(AppServerSvc)
I can "compile" and install the service fine, but as soon as I run it I get an error telling me to check the system logs. When I do, I find the following error:
The Test Service Process service terminated with the service-specific error Incorrect function..
I have not been able to find any help through Google from this decade. I am new to the library, and the only help I found was to add its lib folder to the path. If it's a path error, it's no longer that. Does anyone have any ideas?
Thanks in advance!
It turns out that it was a permission issue. I requested admin rights for 30 seconds and it worked like a charm. Just FYI if anyone is having this problem.
In my case the problem was in the way I ran the Python module. Instead of executing the Python script directly, I use
$ python -m module
This works correctly when running in the appropriate directory, but when running as a service the module could not be found. So the solution, if executing modules directly with Python, is to pip install the module so that it can be found by the service.
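The dependency on the import path can be illustrated with the standard library (`json.tool` stands in for the author's module, which isn't named in the answer): `python -m` resolves the module through `sys.path`, not through the service's working directory, which is why an installed module works from anywhere while an uninstalled local one does not.

```python
import subprocess
import sys

# "python -m json.tool" works regardless of the current directory because
# json is importable from sys.path; an uninstalled local module would not be.
result = subprocess.run(
    [sys.executable, "-m", "json.tool"],
    input='{"a": 1}',
    capture_output=True,
    text=True,
)
print(result.stdout)
```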
In my case it was due to the fact that the Python script I was trying to run as a service wasn't on the same drive as the Python installation. Also check this useful answer.

Python 3: No module named zlib?

I am trying to run my Flask application with an Apache server using mod_wsgi, and it has been a bumpy road, to say the least.
It has been suggested that I should try to run my app's .wsgi file using Python to make sure it is working.
This is the contents of the file:
#!/usr/bin/python
activate_this = '/var/www/Giveaway/Giveaway/venv/bin/activate_this.py'
with open(activate_this) as f:
    code = compile(f.read(), "somefile.py", 'exec')
    exec(code)

import sys
import logging
logging.basicConfig(stream=sys.stderr)
sys.path.insert(0, "/var/www/Giveaways/")

from Giveaways import application
application.secret_key = 'Add your secret key'
However, when I run it, I get this error:
ImportError: No module named 'zlib'
And no, I am not using some homebrewed version of Python - I installed Python 3 via apt-get.
Thanks for any help.
Does somefile.py use the gzip package? In that case you may have to install the gzip package via pip or similar.
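For what it's worth, `ImportError: No module named 'zlib'` usually means the interpreter the virtualenv was built from was compiled without zlib support (gzip itself is standard library, but it depends on zlib). A quick check, run with both the venv's python and the system python, assuming nothing about the Apache setup:

```python
# If this import fails, the interpreter lacks zlib; on Debian/Ubuntu that
# typically means the Python build is missing the zlib development headers.
import zlib

data = b"hello zlib" * 10
compressed = zlib.compress(data)
assert zlib.decompress(compressed) == data
print("zlib is available and round-trips correctly")
```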
