I have two streams right now, and in one stream I want to import certain files from the other based on their file extensions.
If I set it up using the following statements:
import from_second_stream/... //second_stream/....xml
import from_second_stream/... //second_stream/....json
It successfully imports all the files in the correct place, but it strips the file extensions.
For instance, I have a file in the second stream in this path:
//second_stream/test/myTest.json
Which should get imported as:
from_second_stream/test/myTest.json
But instead becomes:
from_second_stream/test/myTest
What am I doing wrong?
According to Perforce support:
The path should be:
import from_second_stream/....json //second_stream/....json
However, embedded wildcards are not allowed in stream views, so you will not be able to use that. Instead you need to specify each file in the view:
import from_second_stream/test/myTest.json //second_stream/test/myTest.json
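For reference, a sketch of what the Paths field of the importing stream's spec might end up looking like with explicit per-file entries (the share line and the exact file list are illustrative, not from the original setup):

Paths:
    share ...
    import from_second_stream/test/myTest.json //second_stream/test/myTest.json
    import from_second_stream/test/myTest.xml //second_stream/test/myTest.xml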
So we ended up having to restructure our build system to accommodate this...
I am trying to use nltk in one of my folders, but it can't find it.
What I try to use:
import nltk
nltk.data.path.append("nltk_data")
in one of my files.
The file tree:
main_azure_folder:
    share_code:
        text_analysis.py
    nltk_data
What is the correct way to set the path?
Thank you
Please have a look at the doc below if you want to import a custom extension in your function; it tells you how the import behavior works:
https://learn.microsoft.com/en-us/azure/azure-functions/functions-reference-python#import-behavior
For example, if you want to import the dog.py file, you can use it like this:
from . import dog
If it is a folder, files inside a folder nested within the current trigger folder cannot be used as a source of imports (the folder must be at the same level as the trigger).
In this situation, we need to use the following (if you want to import something from a folder you created yourself, such as a test folder, use this method as well):
from ..test import cat
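For the nltk_data question above, a minimal sketch assuming nltk_data sits next to the share_code folder (adjust the relative path if your tree differs): build an absolute path anchored to the file itself rather than the working directory, since the working directory of an Azure Functions host is generally not your code folder.

import os
import nltk

# Resolve nltk_data relative to this file, not to the process's working directory.
nltk_data_dir = os.path.abspath(os.path.join(os.path.dirname(__file__), "..", "nltk_data"))
nltk.data.path.append(nltk_data_dir)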
I am doing PCA on CIFAR-10 images on the IBM Watson Studio free version, so I uploaded the Python file for downloading CIFAR-10 to the studio (pic below).
But when I try to import cache, the following error shows (pic below).
After spending some time on Google I found a solution, but I can't understand it.
Link:
https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/add-script-to-notebook.html
The solution is as follows:
Click the Add Data icon, and then browse to the script file or drag it into your notebook sidebar.
Click in an empty code cell in your notebook and then click the Insert to code link below the file. Take the returned string, and write to a file in the file system that comes with the runtime session.
To import the classes to access the methods in a script in your notebook, use the following command:
For Python:
from <python file name> import <class name>
I can't understand this line:
"and write to a file in the file system that comes with the runtime session."
Where can I find the file that comes with the runtime session? Where is the file system located?
Can anyone please help me with the details of where to find that file.
You have the import error because the script that you are trying to import is not available in your Python runtime's local filesystem. The files (cache.py, cifar10.py, etc.) that you uploaded went to the object storage bucket associated with the Watson Studio project. To use those files you need to make them available to the Python runtime, for example by downloading the script to the runtime's local filesystem.
UPDATE: In the meantime, an option has been added to directly insert StreamingBody objects, with all the required credentials included. If you use that option, you can skip ahead to the "write it to a file in the local runtime filesystem" part of this answer.
Or,
You can use the code snippet below to read the script into a StreamingBody object:
import types
import pandas as pd
from botocore.client import Config
import ibm_boto3

def __iter__(self): return 0

os_client = ibm_boto3.client(service_name='s3',
                             ibm_api_key_id='<IBM_API_KEY_ID>',
                             ibm_auth_endpoint='<IBM_AUTH_ENDPOINT>',
                             config=Config(signature_version='oauth'),
                             endpoint_url='<ENDPOINT>')

# Your data file was loaded into a botocore.response.StreamingBody object.
# Please read the documentation of ibm_boto3 and pandas to learn more about
# the possibilities to load the data.
# ibm_boto3 documentation: https://ibm.github.io/ibm-cos-sdk-python/
# pandas documentation: http://pandas.pydata.org/
streaming_body_1 = os_client.get_object(Bucket='<BUCKET>', Key='cifar.py')['Body']

# Add the missing __iter__ method, so pandas accepts the body as a file-like object.
if not hasattr(streaming_body_1, "__iter__"):
    streaming_body_1.__iter__ = types.MethodType(__iter__, streaming_body_1)
And then write it to a file in the local runtime filesystem.
f = open('cifar.py', 'wb')
f.write(streaming_body_1.read())
This opens a file with write access and calls the write method to write to the file. You should then be able to simply import the script.
import cifar
Note: You can get credentials like IBM_API_KEY_ID for the file by clicking the Insert credentials option in the drop-down menu for your file.
The instructions that the OP found miss one crucial line of code. I followed them and was able to import modules, but I wasn't able to use any functions or classes from those modules. This was fixed by closing the file after writing. This part of the instructions:
f = open('<myScript>.py', 'wb')
f.write(streaming_body_1.read())
should instead be (at least this works in my case):
f = open('<myScript>.py', 'wb')
f.write(streaming_body_1.read())
f.close()
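Equivalently, an idiomatic alternative (not part of the original instructions) is a with block, which closes the file automatically even if the write raises:

with open('<myScript>.py', 'wb') as f:
    f.write(streaming_body_1.read())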
Hopefully this helps someone.
In the current Python program I'm working on, I need to access a lot of stored data. I store it as a bunch of dictionaries, each in its own file. Each file exposes a single function, giveArchive(). So to access one of the files, I use:
import fileName
return fileName.giveArchive()
And this has worked well so far, but as the number of files I need grows, I want to streamline this a little bit. I'd like to store all of these files in the same folder, and that folder in the same directory as my main file. Is there some way I can import every file in a folder? And if I do, how can I use 'giveArchive()' from specific files in it?
You can do something like:
from folder.subfolder.deepersubfolder import filename
return filename.giveArchive()
This assumes folder can be accessed from the directory your script is running in.
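If you want to import every file in the folder automatically rather than naming each one, here is a minimal sketch using only the standard library (the package name archives is hypothetical; giveArchive() is from the question; the folder needs an __init__.py so Python treats it as a package):

import importlib
import pkgutil

import archives  # hypothetical package folder sitting next to the main file

def load_all_archives():
    # Import every module in the archives package and collect each giveArchive() result.
    results = {}
    for info in pkgutil.iter_modules(archives.__path__):
        module = importlib.import_module(f"archives.{info.name}")
        results[info.name] = module.giveArchive()
    return results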
I want to see what's happening with a specific operation in a python3 package I've been working on. I use pycallgraph and it looks great. But I can't figure out how to remove an entire tree of calls from the output.
I made a quick script make_call_graphs.py:
import doms.client.schedule as sched
from pycallgraph import PyCallGraph
from pycallgraph.output import GraphvizOutput
from pycallgraph import Config
from pycallgraph import GlobbingFilter
config = Config()
config.trace_filter = GlobbingFilter(exclude=[
    '_find_and_load',
    '_find_and_load.*',  # tried a few similar variations
    '_handle_fromlist',
    '_handle_fromlist.*',
])

with PyCallGraph(output=GraphvizOutput(output_file='schedule_hourly_call_graph.png'), config=config):
    sched.hourly()
Before I started using the GlobbingFilter, _find_and_load was at the top of the tree outside of my doms library call stack. It seems that the filter only removes the top level block, but every subsequent call remains in the output. (See BEFORE and AFTER below)
Obviously I can read the result and copy every single call I don't want to see into the filter, but that is silly. What can I do to remove that whole chunk of stuff outside my doms box? Is there a RecursiveFilter or something I could use?
BEFORE: (call-graph screenshot omitted)
AFTER: (call-graph screenshot omitted)
The solution was much easier than I originally thought and right in front of me: the include kwarg given to the GlobbingFilter.
config.trace_filter = GlobbingFilter(include=['__main__', 'doms.*'])
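In context, the filter block of make_call_graphs.py turns into a whitelist: only __main__ and anything under doms.* is traced, so the import machinery (_find_and_load and friends) never appears in the graph at all. A sketch of the full script with that change:

import doms.client.schedule as sched
from pycallgraph import PyCallGraph, Config, GlobbingFilter
from pycallgraph.output import GraphvizOutput

config = Config()
# Whitelist instead of blacklist: trace only the entry point and the doms package.
config.trace_filter = GlobbingFilter(include=['__main__', 'doms.*'])

with PyCallGraph(output=GraphvizOutput(output_file='schedule_hourly_call_graph.png'), config=config):
    sched.hourly()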
I am trying to write a Python library where some files depend on other files. For example, I have this folder structure:
../libname
../libname/core.py
../libname/supplementary1.py
../libname/supplementary2.py
../libname/__init__.py
where libname is where I import from.
The core.py file begins with:
import supplementary1
import supplementary2
...some code...
This works fine if I test it in the main of core.py.
Let's say I want to use libname as a library in my project. My folder structure is then:
./libname
./main.py
where main.py calls functions from core.py, which in fact need functions from supplementary1 and supplementary2.
Currently, it throws an error saying there is no supplementary1 if I try (in main.py):
from core.py import function1
My question is: how do I import files from my library then? One option would be to copy all the code from e.g. supplementary1 into core.py, but I wish to keep my code elegantly separated, if possible.
So in other words, how does one import a file which itself imports files from a local library?
Thank you very much.
In import ... and from ... import ... statements you write the module name, not the filename. Instead of core.py you should say libname.core, meaning "module core, from package libname" (libname will be searched for in all module paths, which normally include the directory of the script you started, i.e. where your main.py is).
tl;dr: a simple answer to your question is to write from libname.core import function1 instead.
Also, I'd suggest using relative imports: instead of import supplementary1, write from . import supplementary1. Here, from . means "from the current package - the one this file (module) resides in".
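Putting both suggestions together, a minimal sketch (function1 is from the question; helper() is a hypothetical function standing in for whatever supplementary1 actually provides):

# libname/core.py
from . import supplementary1
from . import supplementary2

def function1():
    # Delegate to the supplementary module instead of copying its code here.
    return supplementary1.helper()  # helper() is hypothetical

# main.py, one level above libname/
from libname.core import function1

print(function1())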
Consider reading Python documentation on modules - there are a lot of examples and explanations there.