I am trying to export a file from Google Sheets using Python and I ran into this issue. What do I need to add to my script?
{
  "error": {
    "errors": [
      {
        "domain": "global",
        "reason": "exportSizeLimitExceeded",
        "message": "This file is too large to be exported."
      }
    ],
    "code": 403,
    "message": "This file is too large to be exported."
  }
}
Here's the part of my script that was running when I got the error:
path = "/root/xlsx/"
prefix = "Remastered"
suffix = datetime.datetime.now().strftime("%Y%m%d")
filename = prefix + "_" + suffix + ".xlsx"
v = os.path.join(path, filename)
print(v)
with open(v, 'wb') as f:
    f.write(res.content)
Please modify the URL and test it again.
From:
url = "https://www.googleapis.com/drive/v3/files/" + "sdfasdasdasdasdadsa_1231_123" + "/export?mimeType=application%2Fvnd.openxmlformats-officedocument.spreadsheetml.sheet"
To:
url = "https://docs.google.com/spreadsheets/export?exportFormat=xlsx&id=###YourSpreadsheetId###"
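As a rough sketch, the download with the modified URL could look like this (assuming requests is used for the HTTP call and access_token is the same OAuth access token already used for the Drive API; both are assumptions, not part of the original script):

import datetime
import os

import requests

spreadsheet_id = "###YourSpreadsheetId###"
url = ("https://docs.google.com/spreadsheets/export"
       "?exportFormat=xlsx&id=" + spreadsheet_id)

# access_token is assumed to be a valid OAuth token with Drive scope.
res = requests.get(url, headers={"Authorization": "Bearer " + access_token})
res.raise_for_status()

# Same file-writing logic as in the question.
path = "/root/xlsx/"
filename = "Remastered_" + datetime.datetime.now().strftime("%Y%m%d") + ".xlsx"
with open(os.path.join(path, filename), 'wb') as f:
    f.write(res.content)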
I am copying a file from one drive to another. As part of the request body, I am also providing conflictBehavior as rename (I tried replace as well), but the copy is failing.
POST: https://graph.microsoft.com/beta/users/{user-id}/drive/items/{item-id}/copy
Body:
{
  "parentReference": { "id": {folder-id-to-copy}, "driveId": {drive-id} },
  "@microsoft.graph.conflictBehavior": "rename"
}
After executing the above request, I get a 202 as expected, and I read the Location header. When querying the monitor URL, I see the error below:
{
  "@odata.context": "https://{host-name}/_api/v2.1/$metadata#drives('default')/operations/$entity",
  "id": "7a0decd4-df2f-4717-8eee-b7c2cd131009",
  "createdDateTime": "0001-01-01T00:00:00Z",
  "lastActionDateTime": "0001-01-01T00:00:00Z",
  "status": "failed",
  "error": {
    "code": "nameAlreadyExists",
    "message": "Name already exists"
  }
}
What should I pass in order to rename or replace the existing file while copying?
If you are trying to rename the copy with a specific name, then try this.
POST /users/{user-id}/drive/items/{item-id}/copy
Content-Type: application/json
{
  "parentReference": {
    "id": {folder-id-to-copy},
    "driveId": {drive-id}
  },
  "name": "your_file_name (copy).txt"
}
Reference here: https://learn.microsoft.com/en-us/graph/api/driveitem-copy?...
And if you want the file to be renamed automatically, then try this, using the @microsoft.graph.conflictBehavior instance attribute.
POST /users/{user-id}/drive/items/{item-id}/copy?@microsoft.graph.conflictBehavior=rename
Content-Type: application/json
{
"name": "{filename}"
}
A name should still be provided in the body.
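For completeness, here is a rough Python sketch of the query-parameter approach using requests (this is just the HTTP call above spelled out; access_token, user_id, item_id, folder_id and drive_id are hypothetical placeholders, and the Graph host/version should match your setup):

import requests

# Hypothetical placeholders: access_token, user_id, item_id, folder_id, drive_id.
url = ("https://graph.microsoft.com/v1.0/users/" + user_id
       + "/drive/items/" + item_id
       + "/copy?@microsoft.graph.conflictBehavior=rename")

body = {
    "parentReference": {"id": folder_id, "driveId": drive_id},
    "name": "your_file_name (copy).txt"
}

# Expect 202 Accepted; the Location header holds the monitor URL.
r = requests.post(url, json=body,
                  headers={"Authorization": "Bearer " + access_token})
print(r.status_code, r.headers.get("Location"))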
If you are using the GraphServiceClient (C# SDK), you can do the following:
GraphServiceClient graphClient = new GraphServiceClient(authProvider);

var parentReference = new ItemReference
{
    DriveId = "6F7D00BF-FC4D-4E62-9769-6AEA81F3A21B",
    Id = "DCD0D3AD-8989-4F23-A5A2-2C086050513F"
};

// To resolve the issue: Code: nameAlreadyExists Message: The specified item name already exists.
List<QueryOption> options = new List<QueryOption>
{
    new QueryOption("@microsoft.graph.conflictBehavior", "rename")
};

var name = "contoso plan (copy).txt";

await graphClient.Me.Drive.Items["{driveItem-id}"]
    .Copy(name, parentReference)
    .Request(options)
    .PostAsync();
Ref https://learn.microsoft.com/en-us/graph/api/driveitem-copy?view=graph-rest-1.0&tabs=csharp
I have been trying to implement the following procedure with Python:
https://learn.microsoft.com/en-us/onedrive/developer/rest-api/api/driveitem_createuploadsession?view=odsp-graph-online#create-an-upload-session
Specifically, I have been trying to get the upload URL part working.
Please note that I have already obtained the access token and created the required client ID and secret.
def getUploadUrl(filename="test.txt"):
    global token
    if not token:
        with open('token.json', 'r') as f:
            token = json.load(f)
    if token["expires_at"] < time.time():
        refreshToken()
    location = "/me/drive/root:/FolderA/" + filename + ":/createUploadSession"
    client = OAuth2Session(client_id=client_id,
                           redirect_uri=REDIRECT_URI, token=token)
    headers = {'Content-Type': 'application/json'}
    json_file = {
        "item": {
            "@odata.type": "microsoft.graph.driveItemUploadableProperties",
            "@microsoft.graph.conflictBehavior": "replace",
            "name": filename
        }
    }
    json_string = json.dumps(json_file, indent=4)
    r = client.post(BaseUrl + location,
                    data=json_string, headers=headers)
    print(r.status_code)
    print(r.text)
    upload_url = ""
    if r.status_code == 200:
        upload_url = r.json()['uploadUrl']
        return upload_url, r
    else:
        return "", ""
I keep getting the following error response, though:
{
  "error": {
    "code": "invalidRequest",
    "message": "Invalid request",
    "innerError": {
      "request-id": "7893d0aa-fcdb-46bc-b0b6-58fd90c4cb46",
      "date": "2020-03-21T17:15:13"
    }
  }
}
What I ended up doing for now is just removing
"@odata.type": "microsoft.graph.driveItemUploadableProperties" from the JSON payload.
Try again later; it may have been a transient service bug.
I am new to Python. I am writing code that generates an Excel file from data sourced by calling APIs, correlating the results to get the desired output.
Basically, it takes input from one database, searches for it in the others, and fetches the related information.
The four databases return data like this:
EEp
---------------------
{u'data': [{u'_id': u'5c30702c8ca9f51da8178df4',
u'encap': u'vlan-24',
u'ip': u'7.12.12.16',
u'mac': u'5B:P9:01:9E:42:08'}]}
PathEp
-----------
{u'data': [{u'_id': u'5c54a81a8ca9f51da84ae08e',
u'paths': u'paths-1507',
u'endpoint': u'eth1/10',
u'cep': u'5B:P9:01:9E:42:08',
u'tenant': u'ESX'}]}
ip4_address
-----------------------
{u'data': [{u'Allocation': u'Build_Reserved',
u'address': u'7.12.12.16',
u'name': u'fecitrix-1',
u'state': u'RESERVED'}]}
asset
---------------
{u'data': [{u'_id': u'57ccce8110dd54f02881fedc',
u'client': u'CES',
u'hostname': u'fecitrix-1',
u'os_team': u'Window'}]}
Logic:
1. If the "mac" of EEp equals the "cep" of PathEp, take "encap", "ip", "mac", "paths", "endpoint", "cep" and "tenant" (these values need to be exported to Excel).
2. Take the "ip" of EEp, search for it in "ip4_address", and get the "name" from ip4_address (the name needs to be exported to Excel).
3. If the "name" from ip4_address equals the "hostname" in the "asset" database, take "client" and "os_team" (export those to Excel as well).
I have written the script below, but I am not getting the desired result.
def get_host_details(self):
    data = {
        "find": {
            "hostname": self.controller
        },
        "projection": {
            "tenant": 1,
            "paths": 1,
            "endpoint": 1
        }
    }
    host_details = self.post("https://database.app.com/api/data/devices/PathEp/find", data)
    #print host_details
    hosts = []
    for record in host_details:
        if "mig" not in record["endpoint"]:
            hosts.append(record)
    return hosts
def get_ipaddress(self, controller):
    host_record = {"tenant": "UNKNOWN",
                   "paths": "UNKNOWN",
                   "endpoint": "UNKNOWN",
                   "ip": "UNKNOWN",
                   "mac": "UNKNOWN",
                   "encap": "UNKNOWN"}
    data = {
        "find": {
            "hostname": controller,
            "ip": {
                "$ne": "0.0.0.0"
            }
        },
        "projection": {
            "ip": 1,
            "mac": 1,
            "encap": 1,
        }
    }
    endpoints = self.post("https://database.app.com/api/data/devices/EEp/find", data)
    IPAM = self.get_dns()
    print endpoints
    host_details = self.get_host_details()
    host_details_record = []
    for record in endpoints:
        for host in host_details:
            if record["mac"] == host["cep"]:
                host_record = {"tenant": host["tenant"],
                               "paths": host["paths"],
                               "endpoint": host["endpoint"],
                               "ip": record["ip"],
                               "mac": record["mac"],
                               "encap": record["encap"]}
                host_details_record.append(host_record)
    self.get_excel(host_details_record)
def get_dns(self, endpoints):
    ip_dns_record = []
    for each_endpoint in endpoints:
        data = {
            "find": {
                "address": {
                    "$eq": each_endpoint["ip"]
                },
            },
            "projection": {
                "name": 1
            }
        }
        dns_record = {"client": "UNKNOWN",
                      "os_team": "UNKNOWN"}
        ipam_record = self.post("https://database.app.com/api/data/"
                                "internal/ip4_address/find", data)
        if ipam_record:
            dns_record["ip_address"] = each_endpoint["ip"]
            dns_record["hostname"] = ipam_record[0]["name"]
            dns_record = self.get_remedy_details(ipam_record[0]["name"],
                                                 dns_record)
            ip_dns_record.append(dns_record)
        else:
            dns_record["ip_address"] = each_endpoint["ip"]
            dns_record["hostname"] = "UNKNOWN"
            ip_dns_record.append(dns_record)
    self.get_excel(ip_dns_record)
def get_remedy_details(self, hostname, dns_record):
    data = {
        "find": {
            "hostname": hostname.upper(),
        }
    }
    remedy_data = self.post("https://database.app.com/api/data/internal/asset/find", data)
    print(remedy_data)
    #remedy_data = remedy_data["data"]
    if remedy_data:
        dns_record["client"] = remedy_data[0].get("client", "UNKNOWN")
        dns_record["os_team"] = remedy_data[0].get("os_team", "UNKNOWN")
    else:
        dns_record["client"] = "UNKNOWN"
        dns_record["os_team"] = "UNKNOWN"
    return dns_record
def get_excel(self, ip_dns_record):
    filename = self.controller + ".xls"
    excel_file = xlwt.Workbook()
    sheet = excel_file.add_sheet('HOSTLIST')
    sheet.write(0, 0, "IP Address")
    sheet.write(0, 1, "HostName")
    sheet.write(0, 2, "Client")
    sheet.write(0, 3, "OS Team")
    for count in xrange(1, len(ip_dns_record) + 1):
        sheet.write(count, 0, ip_dns_record[count - 1]["ip_address"])
        sheet.write(count, 1, ip_dns_record[count - 1]["hostname"])
        sheet.write(count, 2, ip_dns_record[count - 1]["client"])
        sheet.write(count, 3, ip_dns_record[count - 1]["os_team"])
    excel_file.save(filename)
if __name__ == "__main__":
    controller = sys.argv[1]
    OBJ = ACIHostList(controller)
    print "SCRIPT COMPLETED"
I have no idea where I am going wrong or what needs to be done.
Your question leaves too much out. You should include all the errors that you get. You should also comment your code so we can understand what you are trying to achieve in each step.
This is not an answer, but something to try:
Rather than trying to wrap your head around an Excel module like xlwt, write your data to a simple CSV file. A CSV file can be opened in Excel and formats correctly, but it is a lot easier to create.
import csv

data = [["a", "b"], ["c", "d"]]

with open("file.csv", "w+") as csv_file:
    create_csv = csv.writer(csv_file)
    create_csv.writerows(data)
Simply gather all your data into a 2D list and, using the code above, dump it into a file so you can easily read it.
Check the output file and see whether you are getting the data you expect.
If the desired data is not making it into this CSV file, then there is an issue with your database queries.
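For example, the matching described in the question could be collected into a 2D list and dumped to CSV like this (a sketch only, with hypothetical sample records shaped like the data in the question; real code would build these lists from the API responses):

import csv

# Hypothetical sample records, shaped like the question's data dumps.
eep = [{"encap": "vlan-24", "ip": "7.12.12.16", "mac": "5B:P9:01:9E:42:08"}]
path_ep = [{"paths": "paths-1507", "endpoint": "eth1/10",
            "cep": "5B:P9:01:9E:42:08", "tenant": "ESX"}]
ip4_address = [{"address": "7.12.12.16", "name": "fecitrix-1"}]
asset = [{"hostname": "fecitrix-1", "client": "CES", "os_team": "Window"}]

rows = [["ip", "mac", "encap", "paths", "endpoint", "tenant",
         "name", "client", "os_team"]]
for ep in eep:
    for host in path_ep:
        if ep["mac"] != host["cep"]:
            continue
        # ip -> name via ip4_address, then name -> client/os_team via asset
        name = next((i["name"] for i in ip4_address
                     if i["address"] == ep["ip"]), "UNKNOWN")
        match = next((a for a in asset if a["hostname"] == name), {})
        rows.append([ep["ip"], ep["mac"], ep["encap"], host["paths"],
                     host["endpoint"], host["tenant"], name,
                     match.get("client", "UNKNOWN"),
                     match.get("os_team", "UNKNOWN")])

with open("hosts.csv", "w") as f:
    csv.writer(f).writerows(rows)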
I want to clean up the leftover [Deleted video] entries in several playlists on my YouTube channel. I'm using this code, but it doesn't work.
YOUTUBE_API_SERVICE_NAME = "youtube"
YOUTUBE_API_VERSION = "v3"
CLIENT_SECRETS_FILE = "client_secrets.json"
YOUTUBE_READ_WRITE_SCOPE = "https://www.googleapis.com/auth/youtube"

def get_authenticated_service(args):
    flow = flow_from_clientsecrets(CLIENT_SECRETS_FILE,
                                   scope=YOUTUBE_READ_WRITE_SCOPE,
                                   message=MISSING_CLIENT_SECRETS_MESSAGE)
    storage = Storage("%s-oauth2.json" % sys.argv[0])
    credentials = storage.get()
    if credentials is None or credentials.invalid:
        credentials = run_flow(flow, storage, args)
    return build(YOUTUBE_API_SERVICE_NAME, YOUTUBE_API_VERSION,
                 http=credentials.authorize(httplib2.Http()))

if __name__ == "__main__":
    try:
        args = argparser.parse_args()
        youtube = get_authenticated_service(args)
        youtube.playlistItems().delete(id="xxxxxxxxx").execute()
    except HttpError as e:
        print("\nAn HTTP error %d occurred:\n%s" % (e.resp.status, e.content))
I get this 403 (Forbidden) error message:
"The request is not properly authorized to delete the specified playlist item."
{
  "error": {
    "errors": [
      {
        "domain": "youtube.playlistItem",
        "reason": "playlistItemsNotAccessible",
        "message": "Forbidden",
        "locationType": "parameter",
        "location": "id"
      }
    ],
    "code": 403,
    "message": "Forbidden"
  }
}
I get the same error even when using the "Try this API" panel here:
https://developers.google.com/youtube/v3/docs/playlistItems/delete?hl=en-419
or here:
https://developers.google.com/youtube/v3/docs/playlistItems/delete?hl=es-419
My credentials, my developer key and my client_secrets.json file are fine, because I have used them before and they work.
Does anyone know what is happening? Or does anyone know another way to remove "Deleted video" entries from a playlist using Python + YouTube API v3?
The problem was solved.
If you execute playlistItems().list(), you get a response like this:
"items": [
{
"kind": "youtube#playlistItem",
"etag": "\"DuHzAJ-eQIiCIp7p4ldoVcVAOeY/Ktqi5NIapmys1w2V0FiorhFR-Uk\"",
"id": "UExES3pRck8tTUFDZndHV3Z0eXVaVHZXNENxNTNGYV9wNC4wMTcyMDhGQUE4NTIzM0Y5",
"snippet": {
"publishedAt": "2018-06-06T13:43:17.000Z",
"channelId": "xxxxxxxxxxxxxxxxxx",
"title": "Deleted video",
"description": "This video is unavailable.",
"channelTitle": "xxxxxxxxxxxxxxxxxx",
"playlistId": "xxxxxxxxxxxxxxxxxxxxxxx",
"position": 0,
"resourceId": {
"kind": "youtube#video",
"videoId": "D6NOeUfxCnM"
}
To delete an item from a playlist, you must use the playlist item id:
"id": "UExES3pRck8tTUFDZndHV3Z0eXVaVHZXNENxNTNGYV9wNC4wMTcyMDhGQUE4NTIzM0Y5",
If you use "videoId": "D6NOeUfxCnM" instead, you get the 403 (Forbidden) error message.
I am trying to make a program that generates a JSON file based on the user's inputs.
The JSON file will then be used to create a Python file with a class generated from the JSON file (not shown here).
Once the inputs are submitted, you choose whether you want to add another instance to the JSON object. When you submit 'Y'/'y', you can create a new instance. Afterwards, if you submit 'N'/'n', I get the error.
import json


class GenerateSchema:
    rounds = 0
    # This list stores the user inputs for each instance
    instances = []

    while True:
        if len(instances) == 0:
            schema_filename = input("*.json filename: ")
            class_name = input("ClassName: ")
            instance_name = input("First instance name: ")
            instance_params = input("Instance parameter(s) ENTER for 'self' only: ")
            instance_description = input("Instance description (OPTIONAL): ")
        keep_generating = input("Generate another instance? [Y/N]: ")
        if keep_generating == "Y" or keep_generating == "y":
            # Appends user inputs to 'instances' list
            instances.append([instance_name, instance_params, instance_description])
            rounds += 1
        else:
            break

    def generate_schema(self):
        with open(self.schema_filename + ".json", "w") as schema_file:
            # Writes the JSON object to the JSON file. This line gives me the error.
            schema_file.write(json.dumps(self.return_schema()))
            schema_file.close()

    def return_schema(self):
        # Returns a JSON object based on user inputs
        return {
            {
                "class": {
                    "name": self.class_name,
                    "instance": {
                        "name": instance[0],
                        "parameters": "self" + instance[1],
                        "description": instance[2]
                    }
                }
            } for instance in self.instances
        }

schema_gen = GenerateSchema()
schema_gen.generate_schema()
If anything in my code is unclear, please tell me. Any help or suggestions are welcome and appreciated. (I have already looked at other questions with the same error, but they don't give me a solution.)
Thank you.
Your return_schema method must return a list of values in order to dump it to JSON. As written, the outer curly braces around the comprehension build a set of dictionaries, which fails because dictionaries are not hashable (and a set could not be serialized by json.dumps anyway). Use square brackets so the comprehension builds a list instead:
def return_schema(self):
    # Returns a JSON object based on user inputs
    return [
        {
            "class": {
                "name": self.class_name,
                "instance": {
                    "name": instance[0],
                    "parameters": "self" + instance[1],
                    "description": instance[2]
                }
            }
        } for instance in self.instances
    ]
I solved the problem by looping through the instances list and creating a new nested dictionary keyed by each instance's name. If you enter the same instance name twice, the second input overwrites the first (I still need to enforce unique instance names).
The generated JSON can hold more than one instance (depending on how many you submitted); I create a new nested dictionary for each instance in a for loop. The code is below:
import json


class GenerateSchema:
    rounds = 0
    instances = []

    while True:
        if len(instances) == 0:
            schema_filename = input("*.json filename: ")
            class_name = input("ClassName: ")
        instance_name = input("Instance name: ")
        instance_params = input("Instance parameter(s) ENTER for 'self' only: ")
        instance_description = input("Instance description (OPTIONAL): ")
        keep_generating = input("Generate another instance? [Y/N]: ")
        if keep_generating == "Y" or keep_generating == "y":
            instances.append([instance_name, instance_params, instance_description])
            rounds += 1
        elif keep_generating == "N" or keep_generating == "n":
            instances.append([instance_name, instance_params, instance_description])
            break
        else:
            print("[Y/N] please: ")

    def generate_schema(self):
        make_file = open(self.schema_filename + ".json", "w")
        make_file.write('{}')
        make_file.close()
        with open(self.schema_filename + ".json", "r+") as f:
            data = json.load(f)
            data[self.class_name] = {
                "name": self.class_name,
            }
            if not self.instance_params:
                for instance in self.instances:
                    data[self.class_name][instance[0]] = {
                        "name": instance[0],
                        "parameters": "self",
                        "description": instance[2]
                    }
            else:
                for instance in self.instances:
                    data[self.class_name][instance[0]] = {
                        "name": instance[0],
                        "parameters": "self, " + instance[1],
                        "description": instance[2]
                    }
            f.seek(0)
            json.dump(data, f)
            f.truncate()
Thank you for your help.