I found this Python code online (twitter_map_clustered.py) which (I think) helps create a map using the geodata of different tweets:
from argparse import ArgumentParser
import folium
from folium.plugins import MarkerCluster
import json

def get_parser():
    parser = ArgumentParser()
    parser.add_argument('--geojson')
    parser.add_argument('--map')
    return parser

def make_map(geojson_file, map_file):
    # Note: the keyword must be lowercase 'location'; 'Location' is not recognized
    tweet_map = folium.Map(location=[50, 5], max_zoom=20)
    marker_cluster = MarkerCluster().add_to(tweet_map)
    geodata = json.load(open(geojson_file))
    for tweet in geodata['features']:
        # GeoJSON stores (longitude, latitude); folium expects (latitude, longitude)
        tweet['geometry']['coordinates'].reverse()
        marker = folium.Marker(tweet['geometry']['coordinates'], popup=tweet['properties']['text'])
        marker.add_to(marker_cluster)
    #Save to HTML map file
    tweet_map.save(map_file)

if __name__ == '__main__':
    parser = get_parser()
    args = parser.parse_args()
    make_map(args.geojson, args.map)
I managed to extract the geo information of different tweets and save it into a geo_data.json file. However, I have trouble understanding the code, especially the function get_parser().
It seems that we need to add arguments when running the file at the command prompt. One argument should be geo_data.json. However, it is also asking for a map: parser.add_argument('--map')
Why is that the case? In the code, aren't we creating the map here?
#Save to HTML map file
tweet_map.save(map_file)
Can you please help me? How would you run the Python script? Is there anything important I am missing?
As explained in the argparse documentation, it simply asks for the name of the geojson file and a name that your code will use to save the map.
Therefore, you will run:
python twitter_map_clustered.py --geojson geo_data.json --map mymap.html
and you will get a map named mymap.html.
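Because both arguments are declared with add_argument, argparse also generates a help message for free; running python twitter_map_clustered.py --help prints roughly the following (exact wording varies by Python version):

usage: twitter_map_clustered.py [-h] [--geojson GEOJSON] [--map MAP]

optional arguments:
  -h, --help         show this help message and exit
  --geojson GEOJSON
  --map MAP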
I have been given an existing script (let's call it existing.py) that in its MVCE form has the following structure.
import argparse

FLAGS = None

def func():
    print(FLAGS.abc)

if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    parser.add_argument(
        '--abc',
        type=str,
        default='',
        help='abc.'
    )
    FLAGS, unparsed = parser.parse_known_args()
    func()
As this is part of a tool that gets constantly updated, I cannot change existing.py. Normally, existing.py is invoked with command-line arguments:
python -m existing --abc "Ok"
which prints the output 'Ok'.
I wish to call the functions (not the whole script) in existing.py from another script. How can I feed in the FLAGS object that is used in the functions of the script? I do not wish to use subprocess, as that would just run the script in its entirety.
I know that argparse creates FLAGS as a Namespace object and I can construct it in calling.py (see code below), but I cannot then push it back into the function that is imported from existing.py into calling.py. The following is the calling.py that I've tried.
from existing import func
import argparse
args = argparse.Namespace()
args.abc = 'Ok'
FLAGS = args
func()
which throws an error
AttributeError: 'NoneType' object has no attribute 'abc'
This is different from other StackOverflow questions as this question explicitly forbids subprocess and the existing script cannot be changed.
Assigning FLAGS = args in calling.py only binds a new name in calling.py's own namespace; existing.FLAGS is still None when func() runs, hence the AttributeError. Instead, import existing and rebind the attribute on the module itself:
existing.FLAGS = args
Now functions defined in the existing namespace will see the desired FLAGS object.
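A minimal sketch of the corrected calling.py (only the 'Ok' value is a placeholder):

import argparse
import existing  # import the module itself, not just func, so we can rebind its FLAGS

args = argparse.Namespace(abc='Ok')
existing.FLAGS = args   # rebind the module-level name that func() reads
existing.func()         # prints: Ok

This works because the parser block in existing.py is guarded by if __name__ == '__main__' and therefore does not run on import.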
Hello, I am writing a cmd tool, and I want to have behaviour like docker does:
docker container run --help
prints the help for this particular command.
I'm stuck with this code:
parser.add_argument("method", help=getHelp())
but the method can be anything, like:
add
remove
update
and how do I later add a method under add, like:
add ram
add cpu
I can add a subparser for add, but how do I then add a subparser for ram?
How can I achieve that with argparse in Python?
Is it even possible?
Can somebody show me an example of a third-level command with its own arguments?
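You can nest sub-parsers: every parser returned by add_parser can have add_subparsers called on it in turn. The example below sketches this with placeholder commands (list comments <post_id>); get_comments is just a stub that fabricates data: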
import argparse
import pprint
import random

def get_comments(args):
    # Stub handler: fabricates some comments for the given post
    return [{'post_id': args.post_id,
             'comment_id': str(random.randrange(1, 1000)),
             'comment': "< comment's body >"}
            for _ in range(random.randrange(1, 10))]

parser = argparse.ArgumentParser()
subparsers = parser.add_subparsers(dest='command')

# First level: "list" and "show"
list_parser = subparsers.add_parser('list')
show_parser = subparsers.add_parser('show')

# Second level: "list comments" and "list accounts"
list_subparsers = list_parser.add_subparsers(dest='type')
comments_parser = list_subparsers.add_parser('comments')
accounts_parser = list_subparsers.add_parser('accounts')

# Third level: "list comments <post_id>" has its own positional argument
comments_parser.add_argument('post_id')
comments_parser.set_defaults(func=get_comments)

args = parser.parse_args()
print(args)
print(args.command)
if hasattr(args, 'func'):
    pprint.pprint(args.func(args))
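Assuming the script above is saved as cli.py (a hypothetical name), a third-level command with its own argument looks like:

python cli.py list comments 42

which parses to a Namespace with command='list', type='comments' and post_id='42', and dispatches to get_comments via args.func. The same pattern extends to any depth: each add_parser result can get its own add_subparsers.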
I have a list of modules that should be imported automatically and in a dynamic way.
Here is a snippet from my code:
import importlib.util
from os.path import dirname

for m in modules_to_import:
    module_name = dirname(__file__) + "/" + m
    spec = importlib.util.spec_from_file_location("package", module_name)
    imported_module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(imported_module)
I measured the time, and each import becomes slower than the previous one. Is there some solution to this, or why does it become slower? Thanks a lot!
I have not timed it, but why don't you simplify your code? Looking at it, you want to import modules that are in the same directory as that file, and by default, when you import a module, that is the first place Python looks.
First, let's create some files to import, all in the same directory:
First.py
def display_first():
    print("I'm first")
Second.py
def display_second():
    print("I'm second")
Third.py
def display_third():
    print("I'm third")
So one way to do it is putting your modules in a dict that you can use afterwards. I'm using a dict comprehension here to build that dict:
Solution1.py
import importlib
modules_to_import = ["First", "Second", "Third"]
modules_imported = {x: importlib.import_module(x) for x in modules_to_import}
modules_imported["First"].display_first()
modules_imported["Second"].display_second()
modules_imported["Third"].display_third()
Or if you really want to use the dotted notation to access a module's contents, you could use a named tuple to help:
Solution2.py
import importlib
import collections
modules_to_import = ["First", "Second", "Third"]
modules_imported = collections.namedtuple("imported_modules", modules_to_import)
for next_module in modules_to_import:
    setattr(modules_imported, next_module, importlib.import_module(next_module))
modules_imported.First.display_first()
modules_imported.Second.display_second()
modules_imported.Third.display_third()
Problem statement
I want the options supported in a Python module to be overridable with a .yaml file, because in some cases there are too many options to specify with non-default values.
I implemented the logic as follows.
import sys
import argparse
import yaml

parser = argparse.ArgumentParser()
# some parser.add_argument statements that come with default values
parser.add_argument("--config_path", default=None, type=str,
                    help="A yaml file for overriding parameters specification in this module.")
args = parser.parse_args()

# Override parameters from the YAML file
if args.config_path is not None:
    with open(args.config_path, "r") as f:
        yml_config = yaml.safe_load(f)
    for k, v in yml_config.items():
        if k in args.__dict__:
            args.__dict__[k] = v
        else:
            sys.stderr.write("Ignored unknown parameter {} in yaml.\n".format(k))
The problem is, for some options I have specific functions/lambda expressions to convert the input strings, such as:
parser.add_argument("--tokens", type=lambda x: x.split(","))
In order to apply the corresponding functions when parsing option specifications in YAML, adding so many if statements does not seem like a good solution, and maintaining a dictionary that changes accordingly whenever new options are introduced in the parser object seems redundant. Is there any way to get the type for each argument in the parser object?
If the elements that you add to the parser with add_argument start with --, then they are actually optional and usually called options. You can find these by walking over the result of the parser instance's _get_optional_actions() method.
If your config.yaml looks like:
tokens: a,b,c
then you can do:
import sys
import argparse
import ruamel.yaml

sys.argv[1:] = ['--config-path', 'config.yaml']  # simulate commandline

yaml = ruamel.yaml.YAML(typ='safe')
parser = argparse.ArgumentParser()
parser.add_argument("--config-path", default=None, type=str,
                    help="A yaml file for overriding parameters specification in this module.")
parser.add_argument("--tokens", type=lambda x: x.split(","))
args = parser.parse_args()

def find_option_type(key, parser):
    for opt in parser._get_optional_actions():
        if ('--' + key) in opt.option_strings:
            return opt.type
    raise ValueError

if args.config_path is not None:
    with open(args.config_path, "r") as f:
        yml_config = yaml.load(f)
    for k, v in yml_config.items():
        if k in args.__dict__:
            typ = find_option_type(k, parser)
            args.__dict__[k] = typ(v)
        else:
            sys.stderr.write("Ignored unknown parameter {} in yaml.\n".format(k))

print(args)
which gives:
Namespace(config_path='config.yaml', tokens=['a', 'b', 'c'])
Please note:
I am using the new API of ruamel.yaml. Loading this way is actually faster than using from ruamel import yaml; yaml.safe_load(), although your config files are probably not big enough to notice.
I am using the file extension .yaml, as recommended in the official FAQ. You should do so as well, unless you cannot (e.g. if your filesystem doesn't allow that).
I use a dash in the option string --config-path instead of an underscore; this is somewhat more natural to type and is automatically converted to an underscore to give a valid identifier name.
You might want to consider a different approach where you parse sys.argv by hand for --config-path, then set the defaults for all the options from the YAML config file, and then call .parse_args(). Doing things in that order allows you to override, on the command line, what has a value in the config file. If you do things your way, you always have to edit the config file if it has all correct values except for one.
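A minimal sketch of that ordering, assuming the only converted option is --tokens. Note that argparse does not run type= converters on defaults, so values in the YAML file should already be in their final form (e.g. tokens: [a, b, c]):

import sys
import argparse
import ruamel.yaml

parser = argparse.ArgumentParser()
parser.add_argument("--config-path", default=None, type=str)
parser.add_argument("--tokens", type=lambda x: x.split(","))

# First pass: find --config-path by hand, before the real parse
config_path = None
for idx, arg in enumerate(sys.argv):
    if arg == '--config-path' and idx + 1 < len(sys.argv):
        config_path = sys.argv[idx + 1]

# Seed the parser defaults from the YAML file
if config_path is not None:
    yaml = ruamel.yaml.YAML(typ='safe')
    with open(config_path) as f:
        parser.set_defaults(**yaml.load(f))

# Second pass: the real parse; explicit command-line values now win over YAML
args = parser.parse_args()
print(args)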
I am using Python 3.4.2 and pythonOCC-0.16.0-win32-py34.exe to draw components. Every component is rendered properly with one defined color, but that does not look like a real-world component.
The image above is from my Python implementation, which generates a 3D image from a STEP file with one color.
The image below is rendered by one of my Windows applications from the same STEP file. I want to render the component so it looks like the one in the image below, i.e. like a real-world component.
Is there any way to get correctly colored output in Python by reading a STEP file? I have searched a lot but didn't find a way to implement it. Please help me move forward.
from __future__ import print_function
import sys
#from OCC.STEPCAFControl import STEPCAFControl_Reader
from OCC.STEPControl import STEPControl_Reader
from OCC.IFSelect import IFSelect_RetDone, IFSelect_ItemsByEntity
from OCC.Display.SimpleGui import init_display
from OCC.Display.WebGl import threejs_renderer

step_reader = STEPControl_Reader()
status = step_reader.ReadFile('./models/test.STEP')

if status == IFSelect_RetDone:  # check status
    failsonly = False
    step_reader.PrintCheckLoad(failsonly, IFSelect_ItemsByEntity)
    step_reader.PrintCheckTransfer(failsonly, IFSelect_ItemsByEntity)
    ok = step_reader.TransferRoot(1)
    _nbs = step_reader.NbShapes()
    aResShape = step_reader.Shape(1)
else:
    print("Error: can't read file.")
    sys.exit(0)

#display, start_display, add_menu, add_function_to_menu = init_display()
#display.DisplayShape(aResShape, update=True)
#start_display()

my_renderer = threejs_renderer.ThreejsRenderer(background_color="#123345")
my_renderer.DisplayShape(aResShape)
The above code is what I use to read the STEP file with Python.