Extracting Certain Contents from a list in python - python-3.x

We are working on a project where we need to retrieve the main contents from a particular list. Since the project uses the Stanford parser, we ran (sdp below is our dependency parser instance):
hello = "This is a sample text"
result = list(sdp.raw_parse(hello))
We tried printing the result, which looked like this:
In: print(result[0])
Out: {0: {'address': 0,
'ctag': 'TOP',
'deps': defaultdict(<class 'list'>, {'root': [5]}),
'feats': None,
'head': None,
'lemma': None,
'rel': None,
'tag': 'TOP',
'word': None},
1: {'address': 1,
'ctag': 'DT',
'deps': defaultdict(<class 'list'>, {}),
'feats': '_',
'head': 5,
'lemma': '_',
'rel': 'nsubj',
'tag': 'DT',
'word': 'This'},
2: {'address': 2,
'ctag': 'VBZ',
'deps': defaultdict(<class 'list'>, {}),
'feats': '_',
'head': 5,
'lemma': '_',
'rel': 'cop',
'tag': 'VBZ',
'word': 'is'},
3: {'address': 3,
'ctag': 'DT',
'deps': defaultdict(<class 'list'>, {}),
'feats': '_',
'head': 5,
'lemma': '_',
'rel': 'det',
'tag': 'DT',
'word': 'a'},
4: {'address': 4,
'ctag': 'NN',
'deps': defaultdict(<class 'list'>, {}),
'feats': '_',
'head': 5,
'lemma': '_',
'rel': 'compound',
'tag': 'NN',
'word': 'sample'},
5: {'address': 5,
'ctag': 'NN',
'deps': defaultdict(<class 'list'>,
{'compound': [4],
'cop': [2],
'det': [3],
'nsubj': [1]}),
'feats': '_',
'head': 0,
'lemma': '_',
'rel': 'root',
'tag': 'NN',
'word': 'text'}})
What I actually want is to print just the
'word': 'This'
Please help me with a solution. Thanks in advance.

print(result[0][1]['word']) will work straightforwardly, but if you don't know which index holds the value 'This', you should iterate with a loop and add a condition.
This can be accomplished with a list comprehension, or you can use map.
Please add some more detail so we can pin down exactly what you need.
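For example, a minimal sketch, assuming result[0] is an NLTK DependencyGraph whose .nodes attribute is the mapping printed above (if result[0] is already that plain dict, drop the .nodes):
nodes = result[0].nodes  # assumption: this is the node mapping shown in the question

# all words in address order, skipping the artificial TOP node (its 'word' is None)
words = [node['word'] for addr, node in sorted(nodes.items()) if node['word'] is not None]
print(words)  # ['This', 'is', 'a', 'sample', 'text']

# only the node(s) whose word is 'This'
print([node['word'] for node in nodes.values() if node['word'] == 'This'])  # ['This']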

Related

How to extract replies from Facebook comments stored in list of dictionaries?

I'm working with a Facebook scraper; however, I'm having difficulties working with the replies to the comments.
This is the code for collecting the comments:
import pandas as pd
import facebook_scraper

post_ids = ['1014199301965488']
options = {"comments": True,
           "reactors": True,
           "allow_extra_requests": True,
           }
cookies = "/content/cookies.txt"  # it is necessary to generate a Facebook cookie file

replies = []
for post in facebook_scraper.get_posts(post_urls=post_ids, cookies=cookies, options=options):
    for p in post['comments_full']:
        replies.append(p)
Basically, each comment can have more than one reply. From what I understand, the replies are stored in a list of dictionaries. Here are some example replies:
[{'comment_id': '1014587065260045', 'comment_url': 'https://facebook.com/1014587065260045', 'commenter_id': '100002664042251', 'commenter_url': 'https://facebook.com/anderson.ritmoapoesia?fref=nf&rc=p&__tn__=R', 'commenter_name': 'Anderson Ritmoapoesia', 'commenter_meta': None, 'comment_text': 'Boa irmão!\nTmj', 'comment_time': datetime.datetime(2015, 8, 17, 0, 0), 'comment_image': 'https://scontent.xx.fbcdn.net/m1/v/t6/An_UvxJXg9tdnLU3Y5qjPi0200MLilhzPXUgxzGjQzUMaNcmjdZA6anyrngvkdub33NZzZhd51fpCAEzNHFhko5aKRFP5fS1w_lKwYrzcNLupv27.png?_nc_eui2=AeH0Z9O-PPSBg9l8FeLeTyUHMiCX3WNpzi0yIJfdY2nOLeM4yQsYnDi7Fo-bVaW2oRmOKEYPCsTFZnVoJbmO2yOH&ccb=10-5&oh=00_AT-4ep4a5bI4Gf173sbCjcAhS7gahF9vcYuM9GaQwJsI9g&oe=6301E8F9&_nc_sid=55e238', 'comment_reactors': [{'name': 'Marcio J J Tomaz', 'link': 'https://facebook.com/marcioroberto.rodriguestomaz?fref=pb', 'type': 'like'}], 'comment_reactions': {'like': 1}, 'comment_reaction_count': 1}]
[{'comment_id': '1014272461958172', 'comment_url': 'https://facebook.com/1014272461958172', 'commenter_id': '100009587231687', 'commenter_url': 'https://facebook.com/cassia.danyelle.94?fref=nf&rc=p&__tn__=R', 'commenter_name': 'Cassia Danyelle', 'commenter_meta': None, 'comment_text': 'Concordo!', 'comment_time': datetime.datetime(2015, 8, 17, 0, 0), 'comment_image': None, 'comment_reactors': [], 'comment_reactions': None, 'comment_reaction_count': None}, {'comment_id': '1014275711957847', 'comment_url': 'https://facebook.com/1014275711957847', 'commenter_id': '1227694094', 'commenter_url': 'https://facebook.com/marcusvinicius.espiritosanto?fref=nf&rc=p&__tn__=R', 'commenter_name': 'Marcus Vinicius Espirito Santo', 'commenter_meta': None, 'comment_text': 'Concordo Marcão a única observação que faço é: a justiça deveria funcionar sempre dessa forma rápida e precisa, como neste caso.', 'comment_time': datetime.datetime(2015, 8, 17, 0, 0), 'comment_image': 'https://scontent.xx.fbcdn.net/m1/v/t6/An_UvxJXg9tdnLU3Y5qjPi0200MLilhzPXUgxzGjQzUMaNcmjdZA6anyrngvkdub33NZzZhd51fpCAEzNHFhko5aKRFP5fS1w_lKwYrzcNLupv27.png?_nc_eui2=AeH0Z9O-PPSBg9l8FeLeTyUHMiCX3WNpzi0yIJfdY2nOLeM4yQsYnDi7Fo-bVaW2oRmOKEYPCsTFZnVoJbmO2yOH&ccb=10-5&oh=00_AT-4ep4a5bI4Gf173sbCjcAhS7gahF9vcYuM9GaQwJsI9g&oe=6301E8F9&_nc_sid=55e238', 'comment_reactors': [{'name': 'Marcos Alexandre de Souza', 'link': 'https://facebook.com/senseimarcos?fref=pb', 'type': 'like'}], 'comment_reactions': {'like': 1}, 'comment_reaction_count': 1}]
[{'comment_id': '1014367808615304', 'comment_url': 'https://facebook.com/1014367808615304', 'commenter_id': '100005145968202', 'commenter_url': 'https://facebook.com/flavioluis.schnurr?fref=nf&rc=p&__tn__=R', 'commenter_name': 'Flavio Luis Schnurr', 'commenter_meta': None, 'comment_text': 'E porque você não morre ! Quem apoia assassinos também é!', 'comment_time': datetime.datetime(2015, 8, 17, 0, 0), 'comment_image': None, 'comment_reactors': [], 'comment_reactions': None, 'comment_reaction_count': None}]
[{'comment_id': '1014222638629821', 'comment_url': 'https://facebook.com/1014222638629821', 'commenter_id': '100009383732423', 'commenter_url': 'https://facebook.com/profile.php?id=100009383732423&fref=nf&rc=p&__tn__=R', 'commenter_name': 'Anerol Ahnuc', 'commenter_meta': None, 'comment_text': 'Hã?', 'comment_time': datetime.datetime(2015, 8, 17, 0, 0), 'comment_image': 'https://scontent.xx.fbcdn.net/m1/v/t6/An_UvxJXg9tdnLU3Y5qjPi0200MLilhzPXUgxzGjQzUMaNcmjdZA6anyrngvkdub33NZzZhd51fpCAEzNHFhko5aKRFP5fS1w_lKwYrzcNLupv27.png?_nc_eui2=AeH0Z9O-PPSBg9l8FeLeTyUHMiCX3WNpzi0yIJfdY2nOLeM4yQsYnDi7Fo-bVaW2oRmOKEYPCsTFZnVoJbmO2yOH&ccb=10-5&oh=00_AT-4ep4a5bI4Gf173sbCjcAhS7gahF9vcYuM9GaQwJsI9g&oe=6301E8F9&_nc_sid=55e238', 'comment_reactors': [], 'comment_reactions': {'like': 1}, 'comment_reaction_count': 1}, {'comment_id': '1014236578628427', 'comment_url': 'https://facebook.com/1014236578628427', 'commenter_id': '100009383732423', 'commenter_url': 'https://facebook.com/profile.php?id=100009383732423&fref=nf&rc=p&__tn__=R', 'commenter_name': 'Anerol Ahnuc', 'commenter_meta': None, 'comment_text': 'Eu hein?', 'comment_time': datetime.datetime(2015, 8, 17, 0, 0), 'comment_image': None, 'comment_reactors': [], 'comment_reactions': None, 'comment_reaction_count': None}]
[{'comment_id': '1014435731941845', 'comment_url': 'https://facebook.com/1014435731941845', 'commenter_id': '100003779689547', 'commenter_url': 'https://facebook.com/marcia.pimentel.5454?fref=nf&rc=p&__tn__=R', 'commenter_name': 'Márcia Pimentel', 'commenter_meta': None, 'comment_text': 'Não é que sejam defensores Marcondes Martins,sim,eles falam que ele era um ser humano que errou e que podia ter pago de outra maneira,e não com a morte,porque só quem tem direito de tirar a vida das pessoas é Aquele que nos deu... Jesus.', 'comment_time': datetime.datetime(2015, 8, 17, 0, 0), 'comment_image': None, 'comment_reactors': [], 'comment_reactions': None, 'comment_reaction_count': None}, {'comment_id': '1014445965274155', 'comment_url': 'https://facebook.com/1014445965274155', 'commenter_id': '100000515531313', 'commenter_url': 'https://facebook.com/DJ.Marcondes.Martins?fref=nf&rc=p&__tn__=R', 'commenter_name': 'Marcondes Martins', 'commenter_meta': None, 'comment_text': 'Marcia Márcia Pimentel ta teoria é tudo bonitinho. Mas bandidos matam, estupram, humilham pessoas de bem e a justiça ainda protege esses vermes, a sociedade ja está cansada disso.', 'comment_time': datetime.datetime(2015, 8, 17, 0, 0), 'comment_image': None, 'comment_reactors': [], 'comment_reactions': None, 'comment_reaction_count': None}]
Based on the data above, I only need the values of 'comment_text'; however, I've never worked with this type of structure. Is it possible to extract each occurrence of 'comment_text'?
Since you're working with a list of dictionaries, I would use a list comprehension to loop over the items in the list and extract only the key I wanted from each dictionary:
replies.append([reply['comment_text'] for reply in p])
An example of what it would do
p = [{'comment_id': '1014272461958172', 'comment_url': 'https://facebook.com/1014272461958172', 'commenter_id': '100009587231687', 'commenter_url': 'https://facebook.com/cassia.danyelle.94?fref=nf&rc=p&__tn__=R', 'commenter_name': 'Cassia Danyelle', 'commenter_meta': None, 'comment_text': 'Concordo!', 'comment_time': datetime.datetime(2015, 8, 17, 0, 0), 'comment_image': None, 'comment_reactors': [], 'comment_reactions': None, 'comment_reaction_count': None}, {'comment_id': '1014275711957847', 'comment_url': 'https://facebook.com/1014275711957847', 'commenter_id': '1227694094', 'commenter_url': 'https://facebook.com/marcusvinicius.espiritosanto?fref=nf&rc=p&__tn__=R', 'commenter_name': 'Marcus Vinicius Espirito Santo', 'commenter_meta': None, 'comment_text': 'Concordo Marcão a única observação que faço é: a justiça deveria funcionar sempre dessa forma rápida e precisa, como neste caso.', 'comment_time': datetime.datetime(2015, 8, 17, 0, 0), 'comment_image': 'https://scontent.xx.fbcdn.net/m1/v/t6/An_UvxJXg9tdnLU3Y5qjPi0200MLilhzPXUgxzGjQzUMaNcmjdZA6anyrngvkdub33NZzZhd51fpCAEzNHFhko5aKRFP5fS1w_lKwYrzcNLupv27.png?_nc_eui2=AeH0Z9O-PPSBg9l8FeLeTyUHMiCX3WNpzi0yIJfdY2nOLeM4yQsYnDi7Fo-bVaW2oRmOKEYPCsTFZnVoJbmO2yOH&ccb=10-5&oh=00_AT-4ep4a5bI4Gf173sbCjcAhS7gahF9vcYuM9GaQwJsI9g&oe=6301E8F9&_nc_sid=55e238', 'comment_reactors': [{'name': 'Marcos Alexandre de Souza', 'link': 'https://facebook.com/senseimarcos?fref=pb', 'type': 'like'}], 'comment_reactions': {'like': 1}, 'comment_reaction_count': 1}]
print([reply['comment_text'] for reply in p]) # ['Concordo!', 'Concordo Marcão a única observação que faço é: a justiça deveria funcionar sempre dessa forma rápida e precisa, como neste caso.']
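If you want a single flat list of texts rather than one list per comment, you can flatten afterwards. A small sketch, assuming replies holds the per-comment lists produced by the append line above:
# replies is assumed to be a list of lists of comment texts,
# as built by replies.append([reply['comment_text'] for reply in p])
flat_texts = [text for group in replies for text in group]
print(flat_texts)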

Forming a list of dicts in a loop according to if conditions

I need your help.
I have this code:
import ipaddress
from ipaddress import IPv4Network
from pprint import pprint

prefixes = []
ip_addresses_all = [{'address': '10.0.0.1/24', 'vrf': {'id': 31, 'name': 'god_inet'}},
                    {'address': '10.0.0.10/24', 'vrf': {'id': 33, 'name': 'for_test'}},
                    {'address': '10.1.1.1/30', 'vrf': {'id': 8, 'name': 'ott_private_net'}},
                    {'address': '10.1.1.2/30', 'vrf': {'id': 11, 'name': 'ott_public_net'}},
                    {'address': '10.10.0.129/30', 'vrf': None},
                    {'address': '10.10.0.130/30', 'vrf': None},
                    {'address': '10.10.0.137/30', 'vrf': None},
                    {'address': '10.10.0.138/30', 'vrf': None}]

for ip in ip_addresses_all:
    prefix = str(ipaddress.ip_network(ip['address'], False))
    mask_length = int(IPv4Network(prefix).prefixlen)
    description_interface = ip['address']
    if ip['vrf']:
        ip_vrf_name = ip['vrf']['name']
        ip_vrf_id = ip['vrf']['id']
        ip_vrf = ip['vrf']
    else:
        ip_vrf_name = 'null'
        ip_vrf_id = 'null'
        ip_vrf = 'null'
    prefix_dict = {'prefix': prefix,
                   'vrf': {'name': ip_vrf_name,
                           'id': ip_vrf_id},
                   'prefix_description': [description_interface]}
    if prefixes:
        for i in prefixes:
            if prefix != i['prefix']:
                prefixes.append(prefix_dict)
            elif i['prefix'] == prefix and i['vrf']['name'] == ip_vrf_name and description_interface not in i['prefix_description']:
                i['prefix_description'].append(description_interface)
    else:
        prefixes.append(prefix_dict)

pprint(prefixes)
So, I want to append dicts to the prefixes list according to this logic: if the prefix is not already in prefixes, create a new dict; if a prefix with the same vrf already exists, append the description string to its description key and update the existing dict.
I have struggled with this for 8 hours and it doesn't work; I get infinite loops =)
Intended output, something like this:
{'10.0.0.0/24': {'prefix_description': ['10.0.0.10/24'],'vrf': {'id': 31, 'name': 'god_inet'}},
'10.0.0.0/24': {'prefix_description': ['10.0.0.10/24'],'vrf': {'id': 33, 'name': 'for_test'}},
'10.1.1.0/30': {'prefix_description': ['10.1.1.2/30'],'vrf': {'id': 8, 'name': 'ott_private_net'}},
'10.1.1.0/30': {'prefix_description': ['10.1.1.2/30'],'vrf': {'id': 11, 'name': 'ott_public_net'}},
'10.10.0.128/30': {'prefix_description': ['10.10.0.129/30',
'10.10.0.130/30'],'vrf': {'id': 'null', 'name': 'null'}},
'10.10.0.136/30': {'prefix_description': ['10.10.0.137/30',
'10.10.0.138/30'],'vrf': {'id': 'null', 'name': 'null'}}}
Or, better, a list like this:
[{'prefix': '10.0.0.0/24',
'prefix_description': ['77-GOD-VPN-2 ---- Vlan40'],
'vrf': {'id': 31, 'name': 'god_inet'}},
{'prefix': '10.0.0.0/24',
'prefix_description': ['78-ELS-CORE ---- Vlan142'],
'vrf': {'id': 33, 'name': 'for_test'}}]
Make prefixes a dict and handle that logic inside the for loop:
import ipaddress
from ipaddress import IPv4Network

prefixes = {}
ip_addresses_all = [{'address': '10.0.0.1/24', 'vrf': {'id': 31, 'name': 'god_inet'}},
                    {'address': '10.0.0.10/24', 'vrf': {'id': 33, 'name': 'for_test'}},
                    {'address': '10.1.1.1/30', 'vrf': {'id': 8, 'name': 'ott_private_net'}},
                    {'address': '10.1.1.2/30', 'vrf': {'id': 11, 'name': 'ott_public_net'}},
                    {'address': '10.10.0.129/30', 'vrf': None},
                    {'address': '10.10.0.130/30', 'vrf': None},
                    {'address': '10.10.0.137/30', 'vrf': None},
                    {'address': '10.10.0.138/30', 'vrf': None}]

for ip in ip_addresses_all:
    prefix = str(ipaddress.ip_network(ip['address'], False))
    mask_length = int(IPv4Network(prefix).prefixlen)
    description_interface = ip['address']
    if ip['vrf']:
        ip_vrf_name = ip['vrf']['name']
        ip_vrf_id = ip['vrf']['id']
        ip_vrf = ip['vrf']
    else:
        ip_vrf_name = 'null'
        ip_vrf_id = 'null'
        ip_vrf = 'null'
    prefix_dict = {
        'vrf': {'name': ip_vrf_name,
                'id': ip_vrf_id},
        'prefix_description': [description_interface]}
    if prefix in prefixes and \
            prefixes[prefix]['vrf']['name'] == ip_vrf_name and \
            description_interface not in prefixes[prefix]['prefix_description']:
        prefixes[prefix]['prefix_description'].append(description_interface)
    else:
        prefixes[prefix] = prefix_dict

from pprint import pprint
pprint(prefixes)
output
{'10.0.0.0/24': {'prefix_description': ['10.0.0.10/24'],
'vrf': {'id': 33, 'name': 'for_test'}},
'10.1.1.0/30': {'prefix_description': ['10.1.1.2/30'],
'vrf': {'id': 11, 'name': 'ott_public_net'}},
'10.10.0.128/30': {'prefix_description': ['10.10.0.129/30', '10.10.0.130/30'],
'vrf': {'id': 'null', 'name': 'null'}},
'10.10.0.136/30': {'prefix_description': ['10.10.0.137/30', '10.10.0.138/30'],
'vrf': {'id': 'null', 'name': 'null'}}}
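If you prefer the list form shown in the question, you can derive it from the dict afterwards. A small sketch, reusing the prefixes dict built above:
prefix_list = [{'prefix': prefix, **data} for prefix, data in prefixes.items()]
pprint(prefix_list)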

password generator with logging

import random, logging

logger = logging.getLogger(__name__)
logger.setLevel(logging.DEBUG)

formatter = logging.Formatter('%(asctime)s:%(levelname)s:%(message)s')

file_handler = logging.FileHandler('student.log')
file_handler.setFormatter(formatter)
logger.addHandler(file_handler)

mylist = ['Aa', 'Bb', 'Cc', 'Dd', 'Ee', 'Ff', 'Gg', 'Hh', 'Ii', 'Jj', 'Kk', 'Ll', 'Mm', 'Nn',
          'Oo', 'Pp', 'Qq', 'Rr', 'Ss', 'Tt', 'Uu', 'Vv', 'Ww', 'Xx', 'Yy', 'Zz', '1', '2', '3', '4', '5', '6', '7', '8',
          '9', '0', '!', '#', '#', '$', '%', '^', '&', '*', '~']

def generatePassword(num):
    password = ''
    for x in range(mylist):
        return password

logging.debug(generatePassword, 16)
When I execute the code, the IDE warns that x is an unused variable. Is there a way to fix this? Also, is there any error in how I wrote the logging calls?
You are currently not using x inside your loop, hence the unused-variable warning.
Regardless, consider using random.choices if you want to allow the password to possibly contain the same character twice, or random.sample if you don't:
import random

def generate_password(length, unique_chars_ignore_case=False):
    my_list = [
        'Aa', 'Bb', 'Cc', 'Dd', 'Ee', 'Ff', 'Gg', 'Hh', 'Ii', 'Jj', 'Kk', 'Ll',
        'Mm', 'Nn', 'Oo', 'Pp', 'Qq', 'Rr', 'Ss', 'Tt', 'Uu', 'Vv', 'Ww', 'Xx',
        'Yy', 'Zz', '1', '2', '3', '4', '5', '6', '7', '8', '9', '0', '!', '#',
        '#', '$', '%', '^', '&', '*', '~'
    ]
    random_func = random.choices if not unique_chars_ignore_case else random.sample
    return ''.join([
        x if len(x) == 1 else x[random.randint(0, 1)]
        for x in random_func(my_list, k=length)
    ])
Example usage, allowing repeated characters:
>>> generate_password(6)
C9#cs2
Example usage, unique characters only (ignoring case):
>>> generate_password(6, unique_chars_ignore_case=True)
k*065#
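As for the logging question: logging.debug(generatePassword, 16) logs through the root logger rather than the logger you configured with the file handler, and it passes the function object without calling it. A minimal sketch of what was probably intended, reusing the logger set up above:
password = generate_password(16)
logger.debug('Generated password: %s', password)  # goes to student.log via the file handler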

how to convert data to standard json

messages=%5B%7B%22values%22%3A+%7B%22momentum%22%3A+%220.00%22%7D%2C+%22exchange%22%3A+%22binance%22%2C+%22market%22%3A+%22BNT%2FETH%22%2C+%22base_currency%22%3A+%22BNT%22%2C+%22quote_currency%22%3A+%22ETH%22%2C+%22indicator%22%3A+%22momentum%22%2C+%22indicator_number%22%3A+0%2C+%22analysis%22%3A+%7B%22config%22%3A+%7B%22enabled%22%3A+true%2C+%22alert_enabled%22%3A+true%2C+%22alert_frequency%22%3A+%22once%22%2C+%22signal%22%3A+%5B%22momentum%22%5D%2C+%22hot%22%3A+0%2C+%22cold%22%3A+0%2C+%22candle_period%22%3A+%224h%22%2C+%22period_count%22%3A+10%7D%2C+%22status%22%3A+%22hot%22%7D%2C+%22status%22%3A+%22hot%22%2C+%22last_status%22%3A+%22hot%22%2C+%22prices%22%3A+%22+Open%3A+0.000989+High%3A+0.000998+Low%3A+0.000980+Close%3A+0.000998%22%2C+%22lrsi%22%3A+%22%22%2C+%22creation_date%22%3A+%222020-05-10+16%3A16%3A23%22%2C+%22hot_cold_label%22%3A+%22%22%2C+%22indicator_label%22%3A+%22%22%2C+%22price_value%22%3A+%7B%22open%22%3A+0.000989%2C+%22high%22%3A+0.000998%2C+%22low%22%3A+0.00098%2C+%22close%22%3A+0.000998%7D%2C+%22decimal_format%22%3A+%22%25.6f%22%7D%2C+%7B%22values%22%3A+%7B%22leading_span_a%22%3A+%220.00%22%2C+%22leading_span_b%22%3A+%220.00%22%7D%2C+%22exchange%22%3A+%22binance%22%2C+%22market%22%3A+%22BNT%2FETH%22%2C+%22base_currency%22%3A+%22BNT%22%2C+%22quote_currency%22%3A+%22ETH%22%2C+%22indicator%22%3A+%22ichimoku%22%2C+%22indicator_number%22%3A+1%2C+%22analysis%22%3A+%7B%22config%22%3A+%7B%22enabled%22%3A+true%2C+%22alert_enabled%22%3A+true%2C+%22alert_frequency%22%3A+%22once%22%2C+%22signal%22%3A+%5B%22leading_span_a%22%2C+%22leading_span_b%22%5D%2C+%22hot%22%3A+true%2C+%22cold%22%3A+true%2C+%22candle_period%22%3A+%224h%22%2C+%22hot_label%22%3A+%22Bullish+Alert%22%2C+%22cold_label%22%3A+%22Bearish+Alert%22%2C+%22indicator_label%22%3A+%22ICHIMOKU+4+hr%22%2C+%22mute_cold%22%3A+false%7D%2C+%22status%22%3A+%22cold%22%7D%2C+%22status%22%3A+%22cold%22%2C+%22last_status%22%3A+%22cold%22%2C+%22prices%22%3A+%22+Open%3A+0.000989+High%3A+0.000998+Low%3A+0.000980+Close%3A+0.000998%22%2C+%22lrsi%22%3A+%22%22%2C+%22creation_date%22%3A+%222020-05-10+16%3A16%3A23%22%2C+%22hot_cold_label%22%3A+%22Bearish+Alert%22%2C+%22indicator_label%22%3A+%22ICHIMOKU+4+hr%22%2C+%22price_value%22%3A+%7B%22open%22%3A+0.000989%2C+%22high%22%3A+0.000998%2C+%22low%22%3A+0.00098%2C+%22close%22%3A+0.000998%7D%2C+%22decimal_format%22%3A+%22%25.6f%22%7D%2C+%7B%22values%22%3A+%7B%22bbp%22%3A+%220.96%22%2C+%22mfi%22%3A+%2298.05%22%7D%2C+%22exchange%22%3A+%22binance%22%2C+%22market%22%3A+%22BNT%2FETH%22%2C+%22base_currency%22%3A+%22BNT%22%2C+%22quote_currency%22%3A+%22ETH%22%2C+%22indicator%22%3A+%22bbp%22%2C+%22indicator_number%22%3A+1%2C+%22analysis%22%3A+%7B%22config%22%3A+%7B%22enabled%22%3A+true%2C+%22alert_enabled%22%3A+true%2C+%22alert_frequency%22%3A+%22once%22%2C+%22candle_period%22%3A+%224h%22%2C+%22period_count%22%3A+20%2C+%22hot%22%3A+0.09%2C+%22cold%22%3A+0.8%2C+%22std_dev%22%3A+2%2C+%22signal%22%3A+%5B%22bbp%22%2C+%22mfi%22%5D%2C+%22hot_label%22%3A+%22Lower+Band%22%2C+%22cold_label%22%3A+%22Upper+Band+BB%22%2C+%22indicator_label%22%3A+%22Bollinger+4+hr%22%2C+%22mute_cold%22%3A+false%7D%2C+%22status%22%3A+%22cold%22%7D%2C+%22status%22%3A+%22cold%22%2C+%22last_status%22%3A+%22cold%22%2C+%22prices%22%3A+%22+Open%3A+0.000989+High%3A+0.000998+Low%3A+0.000980+Close%3A+0.000998%22%2C+%22lrsi%22%3A+%22%22%2C+%22creation_date%22%3A+%222020-05-10+16%3A16%3A23%22%2C+%22hot_cold_label%22%3A+%22Upper+Band+BB%22%2C+%22indicator_label%22%3A+%22Bollinger+4+hr%22%2C+%22price_value%22%3A+%7B%22open%22%3A+0.000989%2C+%22high%22%
3A+0.000998%2C+%22low%22%3A+0.00098%2C+%22close%22%3A+0.000998%7D%2C+%22decimal_format%22%3A+%22%25.6f%22%7D%5D
I need to convert this data in Python 3 to standard JSON so I can POST it to a JSON API.
Any solution? Thanks.
That looks like it's been URL form encoded.
Try
import urllib.parse
import json
# note **without** the message= part
stuff = "%5B%7B%22values%22%3A+%7B%22momentum%22%3A+%220.00%22%7D%2C+%22exchange%22%3A+%22binance%22%2C+%22market%22%3A+%22BNT%2FETH%22%2C+%22base_currency%22%3A+%22BNT%22%2C+%22quote_currency%22%3A+%22ETH%22%2C+%22indicator%22%3A+%22momentum%22%2C+%22indicator_number%22%3A+0%2C+%22analysis%22%3A+%7B%22config%22%3A+%7B%22enabled%22%3A+true%2C+%22alert_enabled%22%3A+true%2C+%22alert_frequency%22%3A+%22once%22%2C+%22signal%22%3A+%5B%22momentum%22%5D%2C+%22hot%22%3A+0%2C+%22cold%22%3A+0%2C+%22candle_period%22%3A+%224h%22%2C+%22period_count%22%3A+10%7D%2C+%22status%22%3A+%22hot%22%7D%2C+%22status%22%3A+%22hot%22%2C+%22last_status%22%3A+%22hot%22%2C+%22prices%22%3A+%22+Open%3A+0.000989+High%3A+0.000998+Low%3A+0.000980+Close%3A+0.000998%22%2C+%22lrsi%22%3A+%22%22%2C+%22creation_date%22%3A+%222020-05-10+16%3A16%3A23%22%2C+%22hot_cold_label%22%3A+%22%22%2C+%22indicator_label%22%3A+%22%22%2C+%22price_value%22%3A+%7B%22open%22%3A+0.000989%2C+%22high%22%3A+0.000998%2C+%22low%22%3A+0.00098%2C+%22close%22%3A+0.000998%7D%2C+%22decimal_format%22%3A+%22%25.6f%22%7D%2C+%7B%22values%22%3A+%7B%22leading_span_a%22%3A+%220.00%22%2C+%22leading_span_b%22%3A+%220.00%22%7D%2C+%22exchange%22%3A+%22binance%22%2C+%22market%22%3A+%22BNT%2FETH%22%2C+%22base_currency%22%3A+%22BNT%22%2C+%22quote_currency%22%3A+%22ETH%22%2C+%22indicator%22%3A+%22ichimoku%22%2C+%22indicator_number%22%3A+1%2C+%22analysis%22%3A+%7B%22config%22%3A+%7B%22enabled%22%3A+true%2C+%22alert_enabled%22%3A+true%2C+%22alert_frequency%22%3A+%22once%22%2C+%22signal%22%3A+%5B%22leading_span_a%22%2C+%22leading_span_b%22%5D%2C+%22hot%22%3A+true%2C+%22cold%22%3A+true%2C+%22candle_period%22%3A+%224h%22%2C+%22hot_label%22%3A+%22Bullish+Alert%22%2C+%22cold_label%22%3A+%22Bearish+Alert%22%2C+%22indicator_label%22%3A+%22ICHIMOKU+4+hr%22%2C+%22mute_cold%22%3A+false%7D%2C+%22status%22%3A+%22cold%22%7D%2C+%22status%22%3A+%22cold%22%2C+%22last_status%22%3A+%22cold%22%2C+%22prices%22%3A+%22+Open%3A+0.000989+High%3A+0.000998+Low%3A+0.000980+Close%3A+0.000998%22%2C+%22lrsi%22%3A+%22%22%2C+%22creation_date%22%3A+%222020-05-10+16%3A16%3A23%22%2C+%22hot_cold_label%22%3A+%22Bearish+Alert%22%2C+%22indicator_label%22%3A+%22ICHIMOKU+4+hr%22%2C+%22price_value%22%3A+%7B%22open%22%3A+0.000989%2C+%22high%22%3A+0.000998%2C+%22low%22%3A+0.00098%2C+%22close%22%3A+0.000998%7D%2C+%22decimal_format%22%3A+%22%25.6f%22%7D%2C+%7B%22values%22%3A+%7B%22bbp%22%3A+%220.96%22%2C+%22mfi%22%3A+%2298.05%22%7D%2C+%22exchange%22%3A+%22binance%22%2C+%22market%22%3A+%22BNT%2FETH%22%2C+%22base_currency%22%3A+%22BNT%22%2C+%22quote_currency%22%3A+%22ETH%22%2C+%22indicator%22%3A+%22bbp%22%2C+%22indicator_number%22%3A+1%2C+%22analysis%22%3A+%7B%22config%22%3A+%7B%22enabled%22%3A+true%2C+%22alert_enabled%22%3A+true%2C+%22alert_frequency%22%3A+%22once%22%2C+%22candle_period%22%3A+%224h%22%2C+%22period_count%22%3A+20%2C+%22hot%22%3A+0.09%2C+%22cold%22%3A+0.8%2C+%22std_dev%22%3A+2%2C+%22signal%22%3A+%5B%22bbp%22%2C+%22mfi%22%5D%2C+%22hot_label%22%3A+%22Lower+Band%22%2C+%22cold_label%22%3A+%22Upper+Band+BB%22%2C+%22indicator_label%22%3A+%22Bollinger+4+hr%22%2C+%22mute_cold%22%3A+false%7D%2C+%22status%22%3A+%22cold%22%7D%2C+%22status%22%3A+%22cold%22%2C+%22last_status%22%3A+%22cold%22%2C+%22prices%22%3A+%22+Open%3A+0.000989+High%3A+0.000998+Low%3A+0.000980+Close%3A+0.000998%22%2C+%22lrsi%22%3A+%22%22%2C+%22creation_date%22%3A+%222020-05-10+16%3A16%3A23%22%2C+%22hot_cold_label%22%3A+%22Upper+Band+BB%22%2C+%22indicator_label%22%3A+%22Bollinger+4+hr%22%2C+%22price_value%22%3A+%7B%22open%22%3A+0.000989%2C+%22high%22%
3A+0.000998%2C+%22low%22%3A+0.00098%2C+%22close%22%3A+0.000998%7D%2C+%22decimal_format%22%3A+%22%25.6f%22%7D%5D"
parsed = urllib.parse.unquote_plus(stuff)  # decode the percent-escapes and turn '+' back into spaces
as_json = json.loads(parsed)
print(as_json)
gives me
[{'values': {'momentum': '0.00'}, 'exchange': 'binance', 'market': 'BNT/ETH', 'base_currency': 'BNT', 'quote_currency': 'ETH', 'indicator': 'momentum', 'indicator_number': 0, 'analysis': {'config': {'enabled': True, 'alert_enabled': True, 'alert_frequency': 'once', 'signal': ['momentum'], 'hot': 0, 'cold': 0, 'candle_period': '4h', 'period_count': 10}, 'status': 'hot'}, 'status': 'hot', 'last_status': 'hot', 'prices': ' Open: 0.000989 High: 0.000998 Low: 0.000980 Close: 0.000998', 'lrsi': '', 'creation_date': '2020-05-10 16:16:23', 'hot_cold_label': '', 'indicator_label': '', 'price_value': {'open': 0.000989, 'high': 0.000998, 'low': 0.00098, 'close': 0.000998}, 'decimal_format': '%.6f'}, {'values': {'leading_span_a': '0.00', 'leading_span_b': '0.00'}, 'exchange': 'binance', 'market': 'BNT/ETH', 'base_currency': 'BNT', 'quote_currency': 'ETH', 'indicator': 'ichimoku', 'indicator_number': 1, 'analysis': {'config': {'enabled': True, 'alert_enabled': True, 'alert_frequency': 'once', 'signal': ['leading_span_a', 'leading_span_b'], 'hot': True, 'cold': True, 'candle_period': '4h', 'hot_label': 'Bullish Alert', 'cold_label': 'Bearish Alert', 'indicator_label': 'ICHIMOKU 4 hr', 'mute_cold': False}, 'status': 'cold'}, 'status': 'cold', 'last_status': 'cold', 'prices': ' Open: 0.000989 High: 0.000998 Low: 0.000980 Close: 0.000998', 'lrsi': '', 'creation_date': '2020-05-10 16:16:23', 'hot_cold_label': 'Bearish Alert', 'indicator_label': 'ICHIMOKU 4 hr', 'price_value': {'open': 0.000989, 'high': 0.000998, 'low': 0.00098, 'close': 0.000998}, 'decimal_format': '%.6f'}, {'values': {'bbp': '0.96', 'mfi': '98.05'}, 'exchange': 'binance', 'market': 'BNT/ETH', 'base_currency': 'BNT', 'quote_currency': 'ETH', 'indicator': 'bbp', 'indicator_number': 1, 'analysis': {'config': {'enabled': True, 'alert_enabled': True, 'alert_frequency': 'once', 'candle_period': '4h', 'period_count': 20, 'hot': 0.09, 'cold': 0.8, 'std_dev': 2, 'signal': ['bbp', 'mfi'], 'hot_label': 'Lower Band', 'cold_label': 'Upper Band BB', 'indicator_label': 'Bollinger 4 hr', 'mute_cold': False}, 'status': 'cold'}, 'status': 'cold', 'last_status': 'cold', 'prices': ' Open: 0.000989 High: 0.000998 Low: 0.000980 Close: 0.000998', 'lrsi': '', 'creation_date': '2020-05-10 16:16:23', 'hot_cold_label': 'Upper Band BB', 'indicator_label': 'Bollinger 4 hr', 'price_value': {'open': 0.000989, 'high': 0.000998, 'low': 0.00098, 'close': 0.000998}, 'decimal_format': '%.6f'}]
Whereas if you want a JSON string to POST somewhere, call as_string = json.dumps(as_json) (note that parsed is already JSON text, so json.dumps(parsed) would double-encode it).
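For example, a minimal sketch of the POST itself, assuming the requests library (the URL is a placeholder):
import requests

# requests serializes the Python object to JSON and sets the Content-Type header for us
resp = requests.post("https://example.com/api/endpoint", json=as_json)
print(resp.status_code)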

How do I remove an item in my dictionary?

I am trying to remove an item ("logs") from my dictionary using the del statement.
This is my code:
del response.json() ["logs"]
print(response.json())
this is my JSON dictionary:
{'count': 19804,
'next': {'limit': 1, 'offset': 1},
'previous': None,
'results':
[{'id': '334455',
'custom_id': '112',
'company': 28,
'company_name': 'Sunshine and Flowers',
'delivery_address': '34 olive beach house, #01-22, 612345',
'delivery_timeslot': {'lower': '2019-12-06T10:00:00Z', 'upper': '2019-12-06T13:00:00Z', 'bounds': '[)'},
'sender_name': 'Edward Shine',
'sender_email': '',
'sender_contact': '91234567',
'removed': None,
'recipient_name': 'Mint Shine',
'recipient_contact': '91234567',
'notes': '',
'items': [{'id': 21668, 'name': 'Loose hair flowers', 'quantity': 1, 'metadata': {}, 'removed': None}, {'id': 21667, 'name': "Groom's Boutonniere", 'quantity': 1, 'metadata': {}, 'removed': None}, {'id': 21666, 'name': 'Bridal Bouquet', 'quantity': 1, 'metadata': {}, 'removed': None}],
'latitude': '1.28283838383642000000',
'longitude': '103.2828037266201000000',
'created': '2019-08-15T05:40:30.385467Z',
'updated': '2019-08-15T05:41:27.930110Z',
'status': 'pending',
'verbose_status': 'Pending',
'logs': [{'id': 334455, 'order': '50c402d8-7c76-45b5-b883-e2fb887a507e', 'order_custom_id': '112', 'order_delivery_address': '34 olive beach house, #01-22, 6123458', 'order_delivery_timeslot': {'lower': '2019-12-06T10:00:00Z', 'upper': '2019-12-06T13:00:00Z', 'bounds': '[)'}, 'message': 'Order was created.', 'failure_reason': None, 'success_code': None, 'success_description': None, 'created': '2019-08-15T05:40:30.431790Z', 'removed': None}, {'id': 334455, 'order': '50c402d8-7c76-45b5-b883-e2fb887a507e', 'order_custom_id': '112', 'order_delivery_address': '34 olive beach house, #01-22, 612345', 'order_delivery_timeslot': {'lower': '2019-12-06T10:00:00Z', 'upper': '2019-12-06T13:00:00Z', 'bounds': '[)'}, 'message': 'Order is pending.', 'failure_reason': None, 'success_code': None, 'success_description': None, 'created': '2019-08-15T05:40:30.433139Z', 'removed': None}],
'reschedule_requests': [],
'signature': None}]
but it raises this error:
KeyError: 'logs'
What am I doing wrong? Please assist.
Every time you call response.json(), the response body is parsed again and a new dict is returned, so a key you delete from one call's result is not reflected in the next call. Also, 'logs' is not a top-level key; it sits inside each element of the 'results' list, which is why you get the KeyError.
You should instead save the return value of response.json() to a variable and then delete the nested key:
data = response.json()
del data['results'][0]['logs']
print(data)
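If 'results' can contain more than one order and you want 'logs' removed from all of them, a small sketch:
data = response.json()
for result in data['results']:
    result.pop('logs', None)  # pop with a default avoids a KeyError if a result has no 'logs'
print(data)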
