ValueError: data_class [Twist] is not a class - python-3.x

I am trying to get values from a .yaml file for subscribing to a topic and writing a bag file in ROS, but I encounter this error and couldn't solve it. I guess I can't use the values from the parameter server as a class name in my code.
#!/usr/bin/env python3
# -*- coding: UTF-8 -*-
import rospy
import rosbag
from geometry_msgs.msg import Twist

filename = 'test.bag'
bag = rosbag.Bag(filename, 'w')

sensor_info = rospy.get_param("/bag/sensor_info")
move_datatype = sensor_info[1]['datatype']  # Twist
move_topic = sensor_info[1]['topic_name']   # /cmd_vel

def move_callback(msg):
    x = msg

def main():
    global x
    rospy.init_node('rosbag_save', anonymous=True)
    rospy.Subscriber(move_topic, move_datatype(), move_callback)
    while not rospy.is_shutdown():
        bag.write(f'{move_topic}', x)

if __name__ == '__main__':
    try:
        main()
    except rospy.ROSInterruptException:
        pass
That is my code, and here is my .yaml file:

bag:
  sensor_info:
    - {"topic_name": "/multi_scan", "datatype": "LaserScan"}
    - {"topic_name": "/cmd_vel", "datatype": "Twist"}

You're getting this error because when you read the rosparam from the parameter server it is stored as a string, which you can't pass to a subscriber. Instead, you should convert it to the class type after reading it in. There are a couple of ways to do this, but using globals() is probably the easiest. Essentially, it returns a dictionary of everything in the global symbol table, and from there you can use a key (a string) to look up a value (a class type). This can be done like so:
sensor_info = rospy.get_param("/bag/sensor_info")
move_datatype_string = sensor_info[1]['datatype'] # Twist
move_topic = sensor_info[1]['topic_name'] # /cmd_vel
move_datatype = globals()[move_datatype_string]
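With that lookup in place, the subscriber should be given the class itself rather than the string or an instance. A minimal sketch of how it fits together, assuming Twist is imported in this module (so globals() can find it) and the move_callback from the question:

# assumes: from geometry_msgs.msg import Twist is at the top of this module
sensor_info = rospy.get_param("/bag/sensor_info")
move_topic = sensor_info[1]['topic_name']               # /cmd_vel
move_datatype = globals()[sensor_info[1]['datatype']]   # the Twist class, not the string "Twist"

# pass the class itself (no parentheses); rospy constructs the messages for you
rospy.Subscriber(move_topic, move_datatype, move_callback)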

Related

Unsuccessful in trying to convert a column of strings to integers in Python (hoping to sort)

I am attempting to sort a dataframe by a column called 'GameId'. The values are currently of type string, and when I attempt to sort, the result is unexpected. I have tried the following, but it still returns type string.
TEST['GameId'] = TEST['GameId'].astype(int)
type('GameId')
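A quick check worth noting: type('GameId') inspects the literal string 'GameId', not the column. A small sketch (with a hypothetical frame standing in for TEST) of how the conversion and the sort can be verified:

import pandas as pd

# hypothetical data standing in for the TEST dataframe from the question
TEST = pd.DataFrame({'GameId': ['10', '2', '1']})

TEST['GameId'] = TEST['GameId'].astype(int)
print(TEST['GameId'].dtype)         # int64 -- checks the column itself, unlike type('GameId')
print(TEST.sort_values('GameId'))   # now sorts numerically: 1, 2, 10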
One way to make working with the data easier is using dataclasses!
from dataclasses import dataclass

# the dataclass decorator adds type hints and generates the boilerplate for us
@dataclass
class Columns:
    channel_id: int
    frequency_hz: int
    power_dBmV: float
    name: str

# this class uses the dataclass to organise the data as data.frequency_hz, data.power_dBmV, etc.
class RadioChannel:
    radio_values = ['channel_id', 'frequency_hz', 'power_dBmV']

    def __init__(self, data):  # self is 'this' but for Python; it just means this instance
        # store the row as a structured Columns record
        self.data = Columns(channel_id=data[0], frequency_hz=data[1],
                            power_dBmV=data[2], name=data[3])

    def present_data(self):
        # this class method is optional, btw
        from rich.console import Console
        from rich.table import Table
        console = Console()
        table = Table(title="My Radio Channels")
        for item in self.radio_values:
            table.add_column(item)
        table.add_row(str(self.data.channel_id), str(self.data.frequency_hz),
                      str(self.data.power_dBmV))
        console.print(table)

# now inside the functional part of your script
if __name__ == '__main__':
    myData = []
    # reading an imaginary file here
    with open("my_radio_data_file", 'r') as myfile:
        for line in myfile.readlines():
            # each line would look something like: value,0,0.0,hello joe
            myData.append(line.strip().split(','))

    ch1 = RadioChannel(data=myData[0])
    ch1.present_data()
This way you can just run the class over each line of a data file and print it to see if it lines up. Once you get the hang of it, it starts to get fun.
I used rich console here, but it works well with pandas and normal dataframes!
dataclasses help the interpreter find its way with type hints and class structure.
Good Luck and have fun!

How to write a simple test code to test a python program that has a function with two arguments?

I'm new to Python. I have written a program that reads a list of files, saves the number of occurrences of a particular character (ch) in each file in a dictionary, and then returns that dictionary.
The program works fine; now I'm trying to write a simple test for it.
I tried the following code:
def test_read_files():
    assert read_files("H:\\SomeTextFiles\\zero-k.txt", 'k') == 0, "Should be 0"

if __name__ == "__main__":
    test_read_files()
    print("Everything passed")
I named this test file test_read_files.py.
My python code is as follows:
# This function reads a list of files and saves the number of
# a particular character (ch) in a dictionary and returns it.
def read_files(filePaths, ch):
    # dictionary for saving the number of occurrences of the character in each file
    dictionary = {}
    for filePath in filePaths:
        try:
            # using a "with" statement with the open() function
            with open(filePath, "r") as file_object:
                # read the file content
                fileContent = file_object.read()
                dictionary[filePath] = fileContent.count(ch)
        except Exception:
            # handling the exception
            print('An error occurred while opening the file ' + filePath)
            dictionary[filePath] = -1
    return dictionary

fileLists = ["H:\\SomeTextFiles\\16.txt", "H:\\SomeTextFiles\\Statement1.txt",
             "H:\\SomeTextFiles\\zero-k.txt", "H:\\SomeTextFiles"]
print(read_files(fileLists, 'k'))
I named it read_files.py.
When I run the test code, I get this error: NameError: name 'read_files' is not defined.
The program and the test code are both in the same folder (different from the Python installation folder, though).
Hopefully I am understanding this correctly, but if both of your Python files, test_read_files.py and read_files.py, are in the same directory, then you should be able to just add the following import at the top of test_read_files.py:
from read_files import read_files
This imports the read_files function from your read_files.py script, so you can call it inside the test file.
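Putting it together, the test file might look like the sketch below, assuming the same example path from the question. Note that read_files takes a list of paths and returns a dictionary, so the assertion compares against one; also, importing read_files.py will run its module-level print unless that call is guarded by if __name__ == '__main__'.

# test_read_files.py -- a sketch, assuming read_files.py is in the same folder
from read_files import read_files

def test_read_files():
    path = "H:\\SomeTextFiles\\zero-k.txt"
    # read_files expects a list of paths and returns {path: count}
    assert read_files([path], 'k') == {path: 0}, "Should be 0 occurrences of 'k'"

if __name__ == "__main__":
    test_read_files()
    print("Everything passed")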

Extract item for each spider in scrapy project

I have over a dozen spiders in a scrapy project, with a variety of items being extracted from different sources. Among other elements, I mostly have to copy the same regex code over and over again in each spider, for example
item['element'] = re.findall('my_regex', response.text)
I use this regex to get the same element, which is defined in the scrapy items. Is there a way to avoid copying it? Where do I put it in the project so that I don't have to repeat it in each spider and only add the expressions that are different?
My project structure is the default one.
Any help is appreciated, thanks in advance.
So if I understand your question correctly, you want to use the same regular expression across multiple spiders.
You can do this:
Create a Python module called something like regex_to_use.
Inside that module, place your regular expression.
Example:
# regex_to_use.py
regex_one = 'test'
You can access this expression in your spiders:
# spider.py
import regex_to_use
import re as regex
find_string = regex.search(regex_to_use.regex_one, ' this is a test')
print(find_string)
# output
<re.Match object; span=(11, 15), match='test'>
You could also do something like this in your regex_to_use module:
# regex_to_use.py
import re as regex

class CustomRegularExpressions(object):
    def __init__(self, text):
        """
        :param text: string containing the variable to search for
        """
        self._text = text

    def search_text(self):
        find_xyx = regex.search('test', self._text)
        return find_xyx
and you would call it this way in your spiders:
# spider.py
from regex_to_use import CustomRegularExpressions
find_word = CustomRegularExpressions('this is a test').search_text()
print(find_word)
# output
<re.Match object; span=(10, 14), match='test'>
If you have multiple regular expressions you could do something like this:
# regex_to_use.py
import re as regex

class CustomRegularExpressions(object):
    def __init__(self, text):
        """
        :param text: string containing the variable to search for
        """
        self._text = text

    def search_text(self, regex_to_use):
        regular_expressions = {"regex_one": 'test_1', "regex_two": 'test_2'}
        expression = ''.join([v for k, v in regular_expressions.items() if k == regex_to_use])
        find_xyx = regex.search(expression, self._text)
        return find_xyx
# spider.py
from regex_to_use import CustomRegularExpressions
find_word = CustomRegularExpressions('this is a test').search_text('regex_one')
print(find_word)
# output
<re.Match object; span=(10, 14), match='test'>
You can also use a staticmethod in the class CustomRegularExpressions
# regex_to_use.py
import re as regex

class CustomRegularExpressions:
    @staticmethod
    def search_text(regex_to_use, text_to_search):
        regular_expressions = {"regex_one": 'test_1', "regex_two": 'test_2'}
        expression = ''.join([v for k, v in regular_expressions.items() if k == regex_to_use])
        find_xyx = regex.search(expression, text_to_search)
        return find_xyx
# spider.py
from regex_to_use import CustomRegularExpressions
# find_word would be replaced with item['element']
# this is a test would be replaced with response.text
find_word = CustomRegularExpressions.search_text('regex_one', 'this is a test')
print(find_word)
# output
<re.Match object; span=(10, 14), match='test'>
If you add a docstring to the function search_text(), you can document the regular expressions available in the Python dictionary there.
Showing how all this works...
This is a Python project that I wrote and published. Take a look at its utilities folder: in it I keep functions that I can use throughout my code without having to copy and paste the same code over and over.
There is a lot of common data that you typically use across multiple spiders, like regexes or even XPath expressions. It's a good idea to isolate them.
You can use something like this:

/project
    /site_data
        handle_responses.py
        ...
    /spiders
        your_spider.py
        ...
Isolate functionalities with a common purpose.
# handle_responses.py
# imports ...
from re import search

def get_specific_common_data(text: str):
    # it's probably a good idea to handle predictable errors here (`try`/`except`)
    return search('your_regex', text)
And just use where is needed that functionality.
# your_spider.py
# imports ...
import scrapy

from site_data.handle_responses import get_specific_common_data

class YourSpider(scrapy.Spider):
    # ... previous code

    def your_method(self, response):
        # ... previous code
        item['element'] = get_specific_common_data(response.text)
Try to keep it simple and do what you need to solve your problem.
I could copy the regex into multiple spiders instead of importing an object from other .py files. I understand those approaches have their use cases, but here I don't want to add anything to any of the spiders and still want the element in the results.
There are some good answers here, but they don't really solve the problem, so after searching for days I have come to this solution; I hope it's useful for others looking for a similar answer.
# middlewares.py
import re

from yourproject.items import YourItem

# find the process_spider_output method in the generated middleware and add your element
def process_spider_output(self, response, result, spider):
    item = YourItem()
    item['element'] = re.findall('my_regex', response.text)
    yield item
    for i in result:   # pass through whatever the spider itself produced
        yield i
Now uncomment the spider middleware in settings.py:

# settings.py
SPIDER_MIDDLEWARES = {
    'yourproject.middlewares.YoursprojectMiddleware': 543,
}
For each spider you will get the element in the result data. I am still searching for a better solution and will update the answer, because this approach slows the spider down.

What's get_products() missing 1 required positional argument: 'self'

I am writing a program for a friend of mine, for fun and practice and to make myself better at Python 3.6.3, and I don't really understand why I got this error.
TypeError: get_products() missing 1 required positional argument: 'self'
I have done some research; it says I should initialize the object, which I did, but it is still giving me this error. Can anyone tell me where I went wrong? Or is there a better way to do it?
from datetime import datetime, timedelta
from time import sleep
from gdax.public_client import PublicClient
# import pandas
import requests

class MyGdaxHistoricalData(object):
    """class for fetching candle data for a given currency pair"""

    def __init__(self):
        print([productList['id'] for productList in PublicClient.get_products()])
        # self.pair = input("""\nEnter your product name separated by a comma.
        self.pair = [i for i in input("Enter: ").split(",")]
        self.uri = 'https://api.gdax.com/products/{pair}/candles'.format(pair=self.pair)

    @staticmethod
    def dataToIso8681(data):
        """convert a datetime object to the ISO-8681 format

        Args:
            date(datetime): The date to be converted

        Return:
            string: The ISO-8681 formatted date
        """
        return 0

if __name__ == "__main__":
    import gdax
    MyData = MyGdaxHistoricalData()
    # MyData = MyGdaxHistoricalData(input("""\nEnter your product name separated by a comma.
    # print(MyData.pair)
Possibly you missed creating an object of PublicClient. Try PublicClient().get_products().
Edited:
Why do I need an object of PublicClient?
A simple rule of thumb in OOP: if you want to use a property (attribute) or behavior (method) of a class, you need an object of that class. Otherwise you need to make it static, using the @staticmethod decorator in Python.
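A minimal sketch of that fix applied to the question's constructor, assuming the gdax PublicClient interface used in the question:

from gdax.public_client import PublicClient

class MyGdaxHistoricalData(object):
    def __init__(self):
        client = PublicClient()            # create an instance first
        products = client.get_products()   # then call the method on that instance
        print([productList['id'] for productList in products])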

Using the globals argument of timeit.timeit

I am attempting to run timeit.timeit in the following class:
from contextlib import suppress
from pathlib import Path
import subprocess
from timeit import timeit

class BackupVolume():
    '''
    Backup a file system on a volume using tar
    '''
    targetFile = "bd.tar.gz"
    srcPath = Path("/BulkData")
    excludes = ["--exclude=VirtualBox VMs/*",  # Exclude all the VM stuff
                "--exclude=*.tar*"]            # Exclude this tar file

    @classmethod
    def backupData(cls, targetPath="~"):  # pylint: disable=invalid-name
        '''
        Runs tar to backup the data in /BulkData so we can reorganize that
        volume. Deletes any old copy of the backup repository.

        Parameters:
        :param str targetPath: Where the backup should be created.
        '''
        # pylint: disable=invalid-name
        tarFile = Path(Path(targetPath / cls.targetFile).resolve())
        with suppress(FileNotFoundError):
            tarFile.unlink()
        timeit('subprocess.run(["tar", "-cf", tarFile.as_posix(),'
               'cls.excludes[0], cls.excludes[1], cls.srcPath.as_posix()])',
               number=1, globals=something)
The problem I have is that inside timeit() it cannot interpret subprocess. I believe that the globals argument to timeit() should help but I have no idea how to specify the module namespace. Can someone show me how?
I think in your case globals = globals() in the timeit call would work.
Explanation
The globals argument specifies a namespace in which to execute the code. Because you import the subprocess module at module level (outside the function, even outside the class), you can use globals(). In doing so you give timeit access to the global dictionary of the current module; you can find more info in the documentation.
Super simple example
In this example I'll cover 3 different scenarios:
Need to access globals
Need to access locals
Custom namespace
Code to follow the example:
import subprocess
from timeit import timeit
import math

class ExampleClass():
    def performance_glob(self):
        return timeit("subprocess.run('ls')", number=1, globals=globals())

    def performance_loc(self):
        a = 69
        b = 42
        return timeit("a * b", number=1, globals=locals())

    def performance_mix(self):
        a = 69
        return timeit("math.sqrt(a)", number=1, globals={'math': math, 'a': a})
In performance_glob you are timing something that needs a module-level import, the subprocess module. If you don't pass the globals namespace you'll get an error message like this: NameError: name 'subprocess' is not defined
On the contrary, if you pass globals() in the method that depends on local values, performance_loc, the variables needed for the timeit execution, a and b, won't be in scope. That's why you use locals() there.
The last one is a general scenario where you need both local variables from the function and module-level imports. Since the globals parameter is just a dictionary, you can build a custom one and provide only the keys you need.
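As a quick usage check (a sketch that assumes the ExampleClass above and that an ls command exists on your system), each method returns the elapsed time in seconds for a single execution:

if __name__ == '__main__':
    example = ExampleClass()
    print(example.performance_glob())   # times subprocess.run('ls') using the module globals
    print(example.performance_loc())    # times a pure-Python multiplication using locals()
    print(example.performance_mix())    # times math.sqrt(a) with a hand-built namespace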
