Create an AzureBlobDatastore() with SDK-V2 - azure

I am trying to create an AzureBlobDatastore() via azure-sdk-v2. Previously, I successfully performed the same operation via azure-sdk-v1 (from the tutorial linked in the next paragraph).
I am following this tutorial: https://learn.microsoft.com/en-us/azure/machine-learning/migrate-to-v2-resource-datastore#create-a-datastore-from-an-azure-blob-container-via-account_key in order to set up/create my AzureBlobDatastore().
This is the code that I am using (as in the tutorial, only updating the parameters in MLClient.from_config(); if I don't pass the credential parameter, I get an error stating that the parameter is empty):
ml_client = MLClient.from_config(credential=DefaultAzureCredential(),
                                 path="./workspace_config_sdk2.json")
store = AzureBlobDatastore(
    name="azureml_sdk2_blob",
    description="Datastore created with sdkv2",
    account_name=storage_account_name,
    container_name=container_name,
    protocol="wasbs",
    credentials={
        "account_key": "..my_account_key.."
    },
)
ml_client.create_or_update(store)
I get the following error:
AttributeError: 'dict' object has no attribute '_to_datastore_rest_object'
Note that the workspace_config_sdk2.json config has the following schema:
{
    "subscription_id": "...",
    "resource_group": "...",
    "workspace_name": "..."
}
How can I solve this error?
EDIT: Investigating the issue, it seems the failure originates in the following code in "azure\ai\ml\entities\_datastore\azure_storage.py":
175 def _to_rest_object(self) -> DatastoreData:
176 blob_ds = RestAzureBlobDatastore(
177 account_name=self.account_name,
178 container_name=self.container_name,
--> 179 credentials=self.credentials._to_datastore_rest_object(),
180 endpoint=self.endpoint,
181 protocol=self.protocol,
182 tags=self.tags,
183 description=self.description,
184 )
185 return DatastoreData(properties=blob_ds)
AttributeError: 'dict' object has no attribute '_to_datastore_rest_object'
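The traceback shows the SDK calling _to_datastore_rest_object() on whatever is passed as credentials, so it expects a credential entity object rather than a plain dict. A minimal sketch of a possible fix, assuming the installed azure-ai-ml version exposes AccountKeyConfiguration under azure.ai.ml.entities (verify against your SDK version's reference):

from azure.ai.ml import MLClient
from azure.ai.ml.entities import AzureBlobDatastore, AccountKeyConfiguration
from azure.identity import DefaultAzureCredential

ml_client = MLClient.from_config(credential=DefaultAzureCredential(),
                                 path="./workspace_config_sdk2.json")

store = AzureBlobDatastore(
    name="azureml_sdk2_blob",
    description="Datastore created with sdkv2",
    account_name=storage_account_name,   # defined elsewhere, as in the question
    container_name=container_name,       # defined elsewhere, as in the question
    protocol="wasbs",
    # Pass a credential object (not a dict) so the SDK can serialize it to a REST object
    credentials=AccountKeyConfiguration(account_key="..my_account_key.."),
)
ml_client.create_or_update(store)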

S4Hana (ERP) backend does not return localized error messages because an additional 'sap-language' header is added by the Cloud SDK

Note: The application is built on the CAP Java stack along with the DWC framework. A technical user is configured in the destination service for making the API call.
Flow:
The API call gets initiated from the UI.
The application layer receives the request and responds with 204.
It then starts the async process explicitly via Spring events and adds a custom 'sap-language' header, based on the user's locale, before making the call to the S4 backend.
httpHeaders.put(DefaultErpHttpDestination.LOCALE_HEADER_NAME,
        dwcHeaderContainer.getHeader(HttpHeaders.ACCEPT_LANGUAGE));

ModificationResponse<PurchaseRequisition> s4ReqResponse = s4opRequistionService
        .createPurchaseRequisition(s4opRequistion)
        .withHeaders(httpHeaders)
        .executeRequest(destinationProvider.getDestination());
However, we observed that the underlying Cloud SDK layer first makes the call to the destination service and retrieves the provided headers, then determines the current system locale based on some constraint and adds another 'sap-language' header with the default system locale 'en', which leads to the problem in our case.
Call stack trace:
Thread [SimpleAsyncTaskExecutor-60] (Suspended)
CdsRequestHeaderFacade.tryGetRequestHeaders() line: 28
RequestHeaderAccessor.tryGetHeaderContainer() line: 89
DefaultLocaleFacade.getLocalesByHeaders() line: 48
DefaultLocaleFacade.getCurrentLocales() line: 36
DefaultLocaleFacade(LocaleFacade).getCurrentLocale() line: 27
LocaleAccessor.getCurrentLocale() line: 53
371621826.get() line: not available
Option$None<T>(Option<T>).getOrElse(Supplier<? extends T>) line: 336
DefaultErpHttpDestination(ErpHttpDestinationProperties).getLocale() line: 51
DefaultErpHttpDestination.getHeadersToAdd() line: 188
DefaultErpHttpDestination.getHeaders(URI) line: 169
HttpClientWrapper.wrapRequest(HttpUriRequest) line: 98
HttpClientWrapper.execute(HttpUriRequest) line: 116
HttpClientWrapper.execute(HttpUriRequest) line: 35
DefaultCsrfTokenRetriever.retrieveCsrfTokenResponseHeader(HttpClient, String, Map<String,String>) line: 91
DefaultCsrfTokenRetriever.retrieveCsrfToken(HttpClient, String, Map<String,String>) line: 54
ODataRequestCreate(ODataRequestGeneric).lambda$tryGetCsrfToken$5ab307ff$1(CsrfTokenRetriever, HttpClient) line: 266
659039215.apply() line: not available
Try<T>.of(CheckedFunction0<? extends T>) line: 75
ODataRequestCreate(ODataRequestGeneric).tryGetCsrfToken(HttpClient, CsrfTokenRetriever) line: 266
ODataRequestCreate(ODataRequestGeneric).tryExecuteWithCsrfToken(HttpClient, Supplier<HttpResponse>) line: 239
ODataRequestCreate.execute(HttpClient) line: 93
PurchaseRequisitionCreateFluentHelper(FluentHelperCreate<FluentHelperT,EntityT>).executeRequest(HttpDestinationProperties) line: 246
xxxxxx.copyPR(Map<String,Object>) line: 176
xxxxxx.s4AdapterResponseToxxxxxx(xxxxxx) line: 69
xxxxxx.onApplicationEvent(xxxxxx) line: 58
xxxxxxSpringListener.onApplicationEvent(ApplicationEvent) line: 1
SimpleApplicationEventMulticaster.doInvokeListener(ApplicationListener, ApplicationEvent)
SimpleApplicationEventMulticaster.invokeListener(ApplicationListener<?>, ApplicationEvent)
SimpleApplicationEventMulticaster.lambda$multicastEvent$0(ApplicationListener, ApplicationEvent)
1824178544.run() line: not available
DwcContextTaskDecorator.lambda$decorate$0(Map, Runnable) line: 33
1126784716.run() line: not available
Thread.run() line: 829
A sample outgoing HTTP request and response looks like:
http-outgoing-13 >> "POST /sap/opu/odata/sap/API_PURCHASEREQ_PROCESS_SRV/A_PurchaseRequisitionHeader HTTP/1.1[\r][\n]"
http-outgoing-13 >> "sap-language: de[\r][\n]"
http-outgoing-13 >> "Accept: application/json[\r][\n]"
http-outgoing-13 >> "RequestID: xxxxx[\r][\n]"
http-outgoing-13 >> "RepeatabilityCreation: 2022-09-19T08:50:00.889315900Z[\r][\n]"
http-outgoing-13 >> "X-CorrelationID: [\r][\n]"
http-outgoing-13 >> "x-csrf-token: xxxxx==[\r][\n]"
http-outgoing-13 >> "Content-Type: application/json[\r][\n]"
http-outgoing-13 >> "Authorization: Basic XXXX[\r][\n]"
http-outgoing-13 >> "sap-language: en[\r][\n]"
http-outgoing-13 >> "Content-Length: 1383[\r][\n]"
http-outgoing-13 >> "Host: XXXX[\r][\n]"
http-outgoing-13 >> "Connection: Keep-Alive[\r][\n]"
http-outgoing-13 >> "User-Agent: Apache-HttpClient/4.5.13 (Java/11.0.16)[\r][\n]"
http-outgoing-13 >> "Cookie: XXX%3d; sap-usercontext=sap-language=en&sap-client=xxx[\r][\n]"
http-outgoing-13 >> "Accept-Encoding: gzip,deflate[\r][\n]"
http-outgoing-13 >> "[\r][\n]"
http-outgoing-13 >> "body_xxx"
http-outgoing-13 << "HTTP/1.1 400 Bad Request[\r][\n]"
http-outgoing-13 << {"error":{"code":"06/101","message":{"lang":"en","value":"No master record exists for supplier EPRINT"}"}
As you can see, two 'sap-language' headers go to the backend, and the S4 backend layer considers the last one, so it always returns the error message in English. Our expectation is that error messages should be returned based on the user's locale.
Questions:
Is there any way for the application layer to instruct the Cloud SDK not to append another 'sap-language' header and to give preference to the value passed in the custom 'sap-language' header ('de' in our case)?
Is this a bug in the Cloud SDK?
Any recommendations or inputs to address this issue are appreciated.
Answers
Is this a bug in the Cloud SDK?
Not really; to me this looks like incorrect API usage:
Your destination is of type DefaultErpHttpDestination. It is significantly different from regular destination types because it automatically adds the sap-client and sap-language headers. It seems the class cannot resolve the request headers at runtime and therefore falls back to the system default en.
You are adding headers via the OData API. While technically possible, this is unnecessary for your call and results in a duplicate header entry.
Is there any way for the Cloud SDK not to append the 'sap-language' header again?
Yes, do not cast to ErpHttpDestination explicitly.
Recommendation
Instead of fiddling with the destination or the OData request, I would suggest the following code:
ModificationResponse<PurchaseRequisition> s4ReqResponse =
    RequestHeaderAccessor.executeWithHeaderContainer(
        dwcHeaderContainer.getHeaders(),
        () -> s4opRequistionService
            .createPurchaseRequisition(s4opRequistion)
            .executeRequest(destinationProvider.getDestination()));
Reason:
Keep benefits of ErpHttpDestination
Enable correct locale resolution by leveraging the request headers and establishing a ThreadContext.
@Alexander, I tried the following code, however it did not work as expected.
Could you please check and suggest whether I am missing something, or whether a version needs to be upgraded in order to adopt the recommendation.
ModificationResponse<PurchaseRequisition> s4ReqResponse =
    RequestHeaderAccessor.executeWithHeaderContainer(dwcHeaderContainer.getHeaders(), () -> {
        ModificationResponse<PurchaseRequisition> s4Response = null;
        try {
            s4Response = s4opRequistionService.createPurchaseRequisition(s4opRequistion)
                .executeRequest(destinationProvider.getDestination());
            return s4Response;
        } catch (ODataException e) {
            logger.error(EXCEPTION_FROM_COPYPR);
            logAndSendErrorMessageToCore(e, requisition);
        }
        return s4Response;
    });
I also ensured that dwcHeaderContainer contains values such as {accept-language=de} along with a few other attributes.
The stack trace still looks similar, and I could not find a place underneath where it picks up the header value from the values supplied via RequestHeaderAccessor.executeWithHeaderContainer().
StackTrace:
Thread [SimpleAsyncTaskExecutor-41] (Suspended)
CdsRequestHeaderFacade.tryGetRequestHeaders() line: 28
RequestHeaderAccessor.tryGetHeaderContainer() line: 89
DefaultLocaleFacade.getLocalesByHeaders() line: 48
DefaultLocaleFacade.getCurrentLocales() line: 36
DefaultLocaleFacade(LocaleFacade).getCurrentLocale() line: 27
LocaleAccessor.getCurrentLocale() line: 53
2065479632.get() line: not available
Option$None<T>(Option<T>).getOrElse(Supplier<? extends T>) line: 336
DefaultErpHttpDestination(ErpHttpDestinationProperties).getLocale() line: 51
DefaultErpHttpDestination.getHeadersToAdd() line: 188
DefaultErpHttpDestination.getHeaders(URI) line: 169
HttpClientWrapper.wrapRequest(HttpUriRequest) line: 98
HttpClientWrapper.execute(HttpUriRequest) line: 116
HttpClientWrapper.execute(HttpUriRequest) line: 35
DefaultCsrfTokenRetriever.retrieveCsrfTokenResponseHeader(HttpClient, String, Map<String,String>) line: 91
DefaultCsrfTokenRetriever.retrieveCsrfToken(HttpClient, String, Map<String,String>) line: 54
ODataRequestCreate(ODataRequestGeneric).lambda$tryGetCsrfToken$5ab307ff$1(CsrfTokenRetriever, HttpClient) line: 266
1607932978.apply() line: not available
Try<T>.of(CheckedFunction0<? extends T>) line: 75
ODataRequestCreate(ODataRequestGeneric).tryGetCsrfToken(HttpClient, CsrfTokenRetriever) line: 266
ODataRequestCreate(ODataRequestGeneric).tryExecuteWithCsrfToken(HttpClient, Supplier<HttpResponse>) line: 239
ODataRequestCreate.execute(HttpClient) line: 93
PurchaseRequisitionCreateFluentHelper(FluentHelperCreate<FluentHelperT,EntityT>).executeRequest(HttpDestinationProperties) line: 246
S4Adapter.lambda$0(PurchaseRequisition, Map) line: 177
64962500.call() line: not available
ThreadContextCallable<T>.call() line: 229
ThreadContextExecutor(AbstractThreadContextExecutor<ExecutorT>).execute(Callable<T>) line: 320
RequestHeaderAccessor.executeWithHeaderContainer(RequestHeaderContainer, Callable<T>) line: 185
RequestHeaderAccessor.executeWithHeaderContainer(Map<String,String>, Callable<T>) line: 160
S4Adapter.copyPR(Map<String,Object>) line: 173

Databricks DataLakeFileClient Returns Error

I have a Databricks notebook running every 5 minutes; part of its functionality is to connect to a file in Azure Data Lake Storage Gen2 (ADLS Gen2).
I get the following error in the code, but it seems to have "come out of nowhere", as the process was previously working fine. The "file = " part is written by me; all the parameters are as expected, match the correct file names/containers, and do exist in the data lake.
---> 92 file = DataLakeFileClient.from_connection_string("DefaultEndpointsProtocol=https;AccountName="+storage_account_name+";AccountKey=" + storage_account_access_key,
93 file_system_name=azure_container, file_path=location_to_write)
94
/databricks/python/lib/python3.8/site-packages/azure/storage/filedatalake/_data_lake_file_client.py in from_connection_string(cls, conn_str, file_system_name, file_path, credential, **kwargs)
116 :rtype ~azure.storage.filedatalake.DataLakeFileClient
117 """
--> 118 account_url, _, credential = parse_connection_str(conn_str, credential, 'dfs')
119 return cls(
120 account_url, file_system_name=file_system_name, file_path=file_path,
/databricks/python/lib/python3.8/site-packages/azure/storage/filedatalake/_shared/base_client.py in parse_connection_str(conn_str, credential, service)
402 if service == "dfs":
403 primary = primary.replace(".blob.", ".dfs.")
--> 404 secondary = secondary.replace(".blob.", ".dfs.")
405 return primary, secondary, credential
Any thoughts/help? The actual error is in the base_client.py code, but I don't even know what "secondary" is supposed to be or why there would be an error there.
For some reason, after restarting the cluster, something changed and the following "EndpointSuffix" was required for this to continue working. I couldn't find any docs on why it would work without it, but until a few days ago it had always worked:
"DefaultEndpointsProtocol=https;AccountName="+storage_account_name+";AccountKey="+storage_account_access_key+";EndpointSuffix=core.windows.net"

WebDriverException: Message: unknown error: bad inspector message error while printing HTML content using ChromeDriver Chrome through Selenium Python

I am scraping some HTML content:
for i, c in enumerate(cards[75:77]):
    print(i)
    a = c.find_element_by_class_name("influencer-stagename")
    print(a.get_attribute('innerHTML'))
Works fine for all records except the 76th one. Output before error...
0
b'<a class="influencer-analytics-link" href="/influencers/sophiewilling"><h5><span>SOPHIE WILLING</span></h5></a>'
1
b'<a class="influencer-analytics-link" href="/influencers/ferntaylorr"><h5><span>Fern Taylor.</span></h5></a>'
2
b'<a class="influencer-analytics-link" href="/influencers/officialshaniceslatter"><h5><span>Shanice Slatter</span></h5></a>'
3
Stacktrace...
> -------------------------------------------------------------------------
WebDriverException Traceback (most recent call last) <ipython-input-484-0a80d1af1568> in <module>
3 #print(c.find_element_by_class_name("influencer-stagename").text)
4 a = c.find_element_by_class_name("influencer-stagename")
----> 5 print(a.get_attribute('innerHTML').encode('ascii', 'ignore'))
~/anaconda3/envs/py3-env/lib/python3.7/site-packages/selenium/webdriver/remote/webelement.py in get_attribute(self, name)
141 self, name)
142 else:
--> 143 resp = self._execute(Command.GET_ELEMENT_ATTRIBUTE, {'name': name})
144 attributeValue = resp.get('value')
145 if attributeValue is not None:
~/anaconda3/envs/py3-env/lib/python3.7/site-packages/selenium/webdriver/remote/webelement.py in _execute(self, command, params)
631 params = {}
632 params['id'] = self._id
--> 633 return self._parent.execute(command, params)
634
635 def find_element(self, by=By.ID, value=None):
~/anaconda3/envs/py3-env/lib/python3.7/site-packages/selenium/webdriver/remote/webdriver.py in execute(self, driver_command, params)
319 response = self.command_executor.execute(driver_command, params)
320 if response:
--> 321 self.error_handler.check_response(response)
322 response['value'] = self._unwrap_value(
323 response.get('value', None))
~/anaconda3/envs/py3-env/lib/python3.7/site-packages/selenium/webdriver/remote/errorhandler.py in check_response(self, response)
240 alert_text = value['alert'].get('text')
241 raise exception_class(message, screen, stacktrace, alert_text)
--> 242 raise exception_class(message, screen, stacktrace)
243
244 def _value_or_default(self, obj, key, default):
WebDriverException: Message: unknown error: bad inspector message: {"id":110297,"result":{"result":{"type":"object","value":{"status":0,"value":"<a class=\"influencer-analytics-link\" href=\"/influencers/bookishemily\"><h5><span>Emily | 18 | GB | Student\uD83C...</span></h5></a>"}}}} (Session info: chrome=75.0.3770.100) (Driver info: chromedriver=2.40.565386 (45a059dc425e08165f9a10324bd1380cc13ca363),platform=Mac OS X 10.14.0 x86_64)
I suspect it is an invalid character in
value":"Emily | 18 | GB | Student\uD83C..."
Specifically, I suspect "\uD83C".
Adding
.encode("utf-8") or .encode('ascii', 'ignore')
to the second print statement changes nothing.
Any thoughts on how to solve this?
UPDATE: The problem is with emoji characters. I have found 3 examples so far, and each has an emoji (pink flower 🌸, Russian flag 🇷🇺 and swirling leaves 🍃). If I edit them out with the Chrome inspector my code runs fine, but this is not a solution that works at scale.
This error message...
WebDriverException: Message: unknown error: bad inspector message: {"id":110297,"result":{"result":{"type":"object","value":{"status":0,"value":"<a class=\"influencer-analytics-link\" href=\"/influencers/bookishemily\"><h5><span>Emily | 18 | GB | Student\uD83C...</span></h5></a>"}}}} (Session info: chrome=75.0.3770.100) (Driver info: chromedriver=2.40.565386 (45a059dc425e08165f9a10324bd1380cc13ca363),platform=Mac OS X 10.14.0 x86_64)
...implies that ChromeDriver was unable to parse some non-UTF-8 characters due to a JSON encoding/decoding issue.
Deep Dive
As per the discussion in Issue 723592: 'Bad inspector message' errors when running URL web-platform-tests via webdriver, John Chen (Owner - WebDriver for Google Chrome) mentioned in his comment:
A JSON encoding/decoding issue caused the "Bad inspector message" error reported at https://travis-ci.org/w3c/web-platform-tests/jobs/232845351. Part of the error message from part 1 contains an invalid Unicode character \uFDD0 (from https://github.com/w3c/web-platform-tests/blob/34435a4/url/urltestdata.json#L3596). The JSON encoder inside Chrome didn't detect such error, and passed it through in the JSON blob sent to ChromeDriver. ChromeDriver uses base/json/json_parser.cc to parse the JSON string. This parser does a more thorough error detection, notices that \uFDD0 is an invalid character, and reports an error. I think our JSON encoder and decoder should have exactly the same amount of error checking. It's problematic that the encoder can create a blob that is rejected by the decoder.
Analysis
John Chen (Owner - WebDriver for Google Chrome) further added:
The JSON encoding happens in protocol layout of DevTools, just before the result is sent back to ChromeDriver. The relevant code is in https://cs.chromium.org/chromium/src/out/Debug/gen/v8/src/inspector/protocol/Protocol.cpp. In particular, escapeStringForJSON function is responsible for encoding strings. It's actually quite conservative. Anything above 126 is encoded in \uXXXX format. (Note that Protocol.cpp is a generated file. The real source is https://cs.chromium.org/chromium/src/v8/third_party/inspector_protocol/lib/Values_cpp.template.)
The error occurs in the JSON parser used by ChromeDriver. The decoding of \uXXXX sequence happens at https://cs.chromium.org/chromium/src/base/json/json_parser.cc?l=564 and https://cs.chromium.org/chromium/src/base/json/json_parser.cc?l=670. After decoding an escape sequence, the decoder rejects anything that's not a valid Unicode character.
I noticed that there was a recent change to prevent a JSON encoder from emitting invalid Unicode code point: https://crrev.com/478900. Unfortunately it's not the JSON encoder used by the code involved in this bug, so it doesn't help us directly, but it's an indication that we're not the only ones affected by this type of issue.
Solution
This issue was addressed by replacing invalid UTF-16 escape sequences when decoding invalid UTF strings in ChromeDriver (web platform tests may use ECMAScript strings which aren't necessarily valid UTF-16) through this revision / commit.
So a quick solution would be to ensure the following and re-execute your tests:
Selenium is upgraded to the current level: Version 3.141.59.
ChromeDriver is updated to the current ChromeDriver v79.0.3945.36 level.
Chrome is updated to the current Chrome Version 79.0 level (as per the ChromeDriver v79.0 release notes).
Alternative
As an alternative, you can use the GeckoDriver / Firefox combination; you can find a relevant discussion in Chromedriver only supports characters in the BMP error while sending Emoji with ChromeDriver Chrome using Selenium Python to Tkinter's label() textbox.
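For illustration, a minimal sketch of that alternative, assuming geckodriver is on the PATH, Selenium 3.x is installed, and reusing the class name from the question (the URL is a placeholder):

from selenium import webdriver

driver = webdriver.Firefox()  # requires geckodriver on PATH
driver.get("https://example.com/influencers")  # placeholder URL

cards = driver.find_elements_by_class_name("influencer-stagename")
for i, c in enumerate(cards):
    # GeckoDriver/Firefox should pass non-BMP characters (emoji) through without
    # the "bad inspector message" JSON parsing error seen with older ChromeDriver
    print(i, c.get_attribute("innerHTML"))

driver.quit()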

Using Python-pptx, what conditions could a PowerPoint have that give KeyError?

I have a PowerPoint that I would like to open, amend, and save under a different filename. However, I'm getting a KeyError.
I tried this code with a blank PowerPoint presentation and it works perfectly. However, when I open an existing PowerPoint presentation and run the same code, I get a KeyError:
KeyError: "There is no item named 'ppt/slides/NULL' in the archive"
# Replace source text
from pptx import Presentation
import re

#s = "string. With. Punctuation?"
#s = re.sub(r'[^\w\s]','',s)
search_str = '{{{FILTER}}}'
repl_str = re.sub(r'[^\w\s]', '', str(list(dashboard_filter2.values())))

ppt = Presentation('HispPres1.pptx')
for slide in ppt.slides:
    for shape in slide.shapes:
        if shape.has_text_frame:
            shape.text = shape.text.replace(search_str, repl_str)

ppt.save('HispPresSourceUpdate.pptx')
I expect the existing PowerPoint to be amended by finding all instances of {{{FILTER}}} and replacing them with the value listed. However, it looks like there's a problem with my existing PowerPoint presentation; I don't have this issue with a blank presentation.
So, I'm wondering what would cause an existing PowerPoint presentation to raise this error. I plan on making several "templates" to start with and really need to know if there are any hard-and-fast rules to adhere to.
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
<ipython-input-42-41deffabe2f9> in <module>()
7 search_str = '{{{FILTER}}}'
8 repl_str = re.sub(r'[^\w\s]','',(str(list(dashboard_filter2.values()))))
----> 9 ppt = Presentation('HispPres1.pptx')
10
11 for slide in ppt.slides:
~\AppData\Local\Continuum\anaconda3\lib\site-packages\pptx\api.py in Presentation(pptx)
28 pptx = _default_pptx_path()
29
---> 30 presentation_part = Package.open(pptx).main_document_part
31
32 if not _is_pptx_package(presentation_part):
~\AppData\Local\Continuum\anaconda3\lib\site-packages\pptx\opc\package.py in open(cls, pkg_file)
120 *pkg_file*.
121 """
--> 122 pkg_reader = PackageReader.from_file(pkg_file)
123 package = cls()
124 Unmarshaller.unmarshal(pkg_reader, package, PartFactory)
~\AppData\Local\Continuum\anaconda3\lib\site-packages\pptx\opc\pkgreader.py in from_file(pkg_file)
34 pkg_srels = PackageReader._srels_for(phys_reader, PACKAGE_URI)
35 sparts = PackageReader._load_serialized_parts(
---> 36 phys_reader, pkg_srels, content_types
37 )
38 phys_reader.close()
~\AppData\Local\Continuum\anaconda3\lib\site-packages\pptx\opc\pkgreader.py in _load_serialized_parts(phys_reader, pkg_srels, content_types)
67 sparts = []
68 part_walker = PackageReader._walk_phys_parts(phys_reader, pkg_srels)
---> 69 for partname, blob, srels in part_walker:
70 content_type = content_types[partname]
71 spart = _SerializedPart(partname, content_type, blob, srels)
~\AppData\Local\Continuum\anaconda3\lib\site-packages\pptx\opc\pkgreader.py in _walk_phys_parts(phys_reader, srels, visited_partnames)
102 yield (partname, blob, part_srels)
103 for partname, blob, srels in PackageReader._walk_phys_parts(
--> 104 phys_reader, part_srels, visited_partnames):
105 yield (partname, blob, srels)
106
~\AppData\Local\Continuum\anaconda3\lib\site-packages\pptx\opc\pkgreader.py in _walk_phys_parts(phys_reader, srels, visited_partnames)
102 yield (partname, blob, part_srels)
103 for partname, blob, srels in PackageReader._walk_phys_parts(
--> 104 phys_reader, part_srels, visited_partnames):
105 yield (partname, blob, srels)
106
~\AppData\Local\Continuum\anaconda3\lib\site-packages\pptx\opc\pkgreader.py in _walk_phys_parts(phys_reader, srels, visited_partnames)
99 visited_partnames.append(partname)
100 part_srels = PackageReader._srels_for(phys_reader, partname)
--> 101 blob = phys_reader.blob_for(partname)
102 yield (partname, blob, part_srels)
103 for partname, blob, srels in PackageReader._walk_phys_parts(
~\AppData\Local\Continuum\anaconda3\lib\site-packages\pptx\opc\phys_pkg.py in blob_for(self, pack_uri)
107 matching member is present in zip archive.
108 """
--> 109 return self._zipf.read(pack_uri.membername)
110
111 def close(self):
~\AppData\Local\Continuum\anaconda3\lib\zipfile.py in read(self, name, pwd)
1312 def read(self, name, pwd=None):
1313 """Return file bytes (as a string) for name."""
-> 1314 with self.open(name, "r", pwd) as fp:
1315 return fp.read()
1316
~\AppData\Local\Continuum\anaconda3\lib\zipfile.py in open(self, name, mode, pwd, force_zip64)
1350 else:
1351 # Get info object for name
-> 1352 zinfo = self.getinfo(name)
1353
1354 if mode == 'w':
~\AppData\Local\Continuum\anaconda3\lib\zipfile.py in getinfo(self, name)
1279 if info is None:
1280 raise KeyError(
-> 1281 'There is no item named %r in the archive' % name)
1282
1283 return info
KeyError: "There is no item named 'ppt/slides/NULL' in the archive"
Yeah, this is a bit of a thorny problem. The spec doesn't provide for a "broken" relationship (one that refers to a package part that doesn't exist), but at least one library (Java-based, if I recall correctly) does not clean up relationships properly in some cases, perhaps after a slide-delete operation in this case.
The gist of the explanation is this:
A PPTX file is an Open Packaging Convention (OPC) package. DOCX and XLSX files are other examples of OPC packages.
An OPC package is a Zip archive of multiple parts (official term, perhaps package-part more precisely). Each part is essentially a file, so something like slide1.xml, and they are arranged in a "directory structure".
One part can be related to other parts. For example, a presentation part (presentation.xml) is related to each of its slide parts. These relationships are stored in a file like presentation.xml.rels. The relationship is keyed with a string like "rId3" and identifies the related part by its path in the package.
One part refers to another using the key in its XML (e.g. <p:sldId r:id="rId3"/>). The target part is "looked-up" in the .rels file to find its path and get to it that way.
The KeyError you're getting means that the .rels file has a <Relationship> element referring to the part ppt/slides/NULL (instead of something like ppt/slides/slide3.xml). Since there is no such part in the package, the lookup fails.
If you open the "template" file in PowerPoint and save it, I think it will repair itself. You might need to rearrange a slide and move it back to jostle that part of the code.
If that doesn't work, you'll need to patch the package by hand, removing any broken references and relationships. opc-diag can be handy for that.
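If you want to locate the broken reference before repairing it, here is a quick standard-library sketch (not part of the original answer) that scans the slide relationships in the package for targets missing from the archive, using the filename from the question:

import posixpath
import zipfile
import xml.etree.ElementTree as ET

REL_NS = "{http://schemas.openxmlformats.org/package/2006/relationships}"

with zipfile.ZipFile('HispPres1.pptx') as z:
    names = set(z.namelist())
    # The presentation part's .rels file holds the slide relationships
    rels_xml = z.read('ppt/_rels/presentation.xml.rels')
    for rel in ET.fromstring(rels_xml).iter(REL_NS + 'Relationship'):
        if rel.get('TargetMode') == 'External':
            continue  # e.g. hyperlinks; nothing to check inside the package
        target = rel.get('Target')
        # Targets are relative to ppt/, e.g. "slides/slide1.xml"
        resolved = posixpath.normpath(posixpath.join('ppt', target))
        if resolved not in names:
            print('Broken relationship:', rel.get('Id'), '->', target)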
You can clean the PPTX of the dangling relationships through:
File -> Info -> Check for Issues -> Inspect Document.
Clean up, save, and re-run the Python script.
So, thanks Scanny for the help. You're exactly right: the lookup was looking for ppt/slides/slide#.xml and it wasn't finding a relationship for it. The reason is that the relationships are coded as just slides/slide#.xml (without ppt/). I did get into opc-diag to see what I could do there, but I found an easy fix.
My previous code had a line that said for slide in ppt.slides: and this was the error: KeyError: "There is no item named 'ppt/slides/NULL' in the archive". When I browsed the PresentationML using opc-diag, I found that the relationship was set up like this: <Relationship Id="x" Type="http://schemas.openxmlformats.org/officeDocument/2006/relationships/slide" Target="slides/slide1.xml"/>\n. The relationship does not include ppt.
So, to get rid of that lookup and have it match the way PowerPoint stores the slide relationships, I changed these lines:
ppt = Presentation('HispPres1.pptx')
for slide in ppt.slides:
to this
ppt = Presentation('HispPres1.pptx')
slides = ppt.slides
for slide in slides:

Zurb Foundation for Apps - CLI Fails2

At the risk of posting a duplicate: I am new here and don't have enough reputation yet, so it wouldn't let me comment on the only relevant similar question I did find here:
Zurb Foundation for Apps - CLI Fails
However, I tried the answer there and I still get the same failure.
My message is (I don't have the reputation to post images of it) essentially the same as the other post, except mine mentions line 118 of foundationCLI.js where theirs notes line 139. Also, the answer said to fix line 97, but in mine that code is on line 99.
92 // NEW
93 // Clones the Foundation for Apps template and installs dependencies
94 module.exports.new = function(args, options) {
95 var projectName = args[0];
96 var gitClone = ['git', 'clone', 'https://github.com/zurb/foundation-apps-template.git', args[0]];
97 var npmInstall = [npm, 'install'];
98 var bowerInstall = [bower, 'install'];
99 var bundleInstall = [bundle.bat];
100 if (isRoot()) bowerInstall.push('--allow-root');
101
102 // Show help screen if the user didn't enter a project name
103 if (typeof projectName === 'undefined') {
104 this.help('new');
105 process.exit();
106 }
107
108 yeti([
109 'Thanks for using Foundation for Apps!',
110 '-------------------------------------',
111 'Let\'s set up a new project.',
112 'It shouldn\'t take more than a minute.'
113 ]);
114
115 // Clone the template repo
116 process.stdout.write("\nDownloading the Foundation for Apps template...".cyan);
117 exec(gitClone, function(err, out, code) {
118 if (err instanceof Error) throw err;
119
120 process.stdout.write([
121 "\nDone downloading!".green,
122 "\n\nInstalling dependencies...".cyan,
123 "\n"
124 ].join(''));
I also posted an error log yesterday at
https://github.com/npm/npm/issues/7024
as directed in the error message (which I am unable to post the image of), but I have yet to receive a response there.
Any idea how I can get past this so I can start an app?
Thanks, A
You may also need to install the git command-line client. On line 117, foundation-cli.js is trying to run git clone, and this is failing.
Could you please run
git --version
and paste the text (not an image) of the output you see?
If you have already installed git (e.g., because you have GitHub for Windows, https://windows.github.com/), then you may need to use the Git Shell shortcut or close/re-open your command prompt window in order to use git on the command line.
Once you've installed git and closed/reopened your shell, try the command foundation-apps new myApp again.
