AML service scoring using environment object - azure-machine-learning-service

How do we specify supporting source folders during scoring when using the Environment object?
Suppose there are multiple dependency files (like utils) and I have placed them in a folder under the source directory; will the files get loaded into the image?

If you put the files under the source_directory specified in the InferenceConfig, they should get mounted within the container.
You can also bake the files into the image itself by specifying custom Docker build steps for the Environment, for example a RUN wget <url to my files> step to fetch the files from a web location.
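A minimal sketch of that setup with the Python azureml-core SDK (v1), assuming a hypothetical local layout of ./scoring_src containing score.py and a utils/ subfolder (all names below are placeholders, not from the original question):

from azureml.core import Environment
from azureml.core.model import InferenceConfig

env = Environment(name="my-scoring-env")              # placeholder environment name
env.python.conda_dependencies.add_pip_package("azureml-defaults")

# Optional: bake extra files into the image via custom Docker build steps instead.
# env.docker.base_dockerfile = "FROM <your base image>\nRUN wget <url to my files>"

# Everything under source_directory (score.py plus the utils/ folder) is packaged
# and made available to the entry script inside the scoring container.
inference_config = InferenceConfig(
    source_directory="./scoring_src",
    entry_script="score.py",   # path relative to source_directory
    environment=env,
)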

Related

SAP Commerce Cloud Hot Folder local setup

We are trying to use the cloud hot folder functionality, and in order to do so we are modifying our existing hot-folder implementation, which was not originally implemented for use in the cloud.
Following the steps on this help page:
https://help.sap.com/viewer/0fa6bcf4736c46f78c248512391eb467/SHIP/en-US/4abf9290a64f43b59fbf35a3d8e5ba4d.html
We are trying to test the cloud functionality locally. I have an Azurite Docker container running on my machine and I have modified the mentioned properties in the local.properties file, but it seems that the files are not being picked up by hybris in any of the cases we are trying.
First, we have in our local Azurite storage a blob container called hybris. Within this container we have the folder path master/hotfolder, and according to the docs, uploading a sample.csv file into it should trigger a hot folder import.
Also we have a mapping for our hot-folder import that scans the files within this folder: #{baseDirectory}/${tenantId}/sample/classifications. {baseDirectory} is configured using a property like so: ${HYBRIS_DATA_DIR}/sample/import
Can we keep these mappings within our hot folder xml definitions, or do we need to change them?
How should the blob container be named in order for it to be accessible to hybris?
Thank you very much,
I would be very happy to provide any further information.
In the end I did manage to run cloud hot folder imports on a local machine.
It was a matter of correctly configuring a number of properties used by the cloudhotfolder and azurecloudhotfolder extensions.
Simply use the following properties to set the desired behaviour of the system:
cluster.node.groups=integration,yHotfolderCandidate
azure.hotfolder.storage.account.connection-string=DefaultEndpointsProtocol=http;AccountName=devstoreaccount1;AccountKey=Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==;BlobEndpoint=http://127.0.0.1:32770/devstoreaccount1;
azure.hotfolder.storage.container.hotfolder=${tenantId}/your/path/here
cloud.hotfolder.default.mapping.file.name.pattern=^(customer|product|url_media|sampleFilePattern|anotherFileNamePattern)-\\d+.*
cloud.hotfolder.default.images.root.url=http://127.0.0.1:32785/devstoreaccount1/${azure.hotfolder.storage.container.name}/master/path/to/media/folder
cloud.hotfolder.default.mapping.header.catalog=YourProductCatalog
And that is it. If there are existing routings for a traditional hot folder import, they can also be used, but their file names must be matched by the value of the
cloud.hotfolder.default.mapping.file.name.pattern
property.
I am trying the same - to set up a local dev environment to test out the cloud hot folder. It seems that you have had some success. Can you share where you located the azurecloudhotfolder extension, which is called out here: https://help.sap.com/viewer/0fa6bcf4736c46f78c248512391eb467/SHIP/en-US/4abf9290a64f43b59fbf35a3d8e5ba4d.html
Thanks

`cp` vs `rsync` vs something faster

I am using Docker, and Docker cannot COPY symlinked files into the image. But the files that are symlinked are not in the build context. So I was going to copy them into the build context with cp, but that's really slow. Is there some way to share the files in two different locations on disk without having to copy them and without using symlinks?
This is not allowed, and it won't be:
https://github.com/moby/moby/issues/1676
We do not allow this because it's not repeatable. A symlink on your machine is not the same as on my machine, so the same Dockerfile would produce two different results. Also, having symlinks to /etc/passwd would cause issues because it would link to the host's files rather than your local files.
If you have common files which are needed in every container, then I would put all of them in a shared image and use Docker multi-stage builds:
FROM mysharedimage as shared
FROM alpine
COPY --from=shared /my/common/stuff /common
....
Again, still not the most elegant solution, but because docker build archives the current context and sends it to the Docker daemon, soft links won't work.
You can create hard links, but hard links point to inodes, so they don't show you which file they point to. Soft links, on the other hand, do tell you where they point, but the build doesn't send them.
ln /source/file /dest/file
So it's really your call what you want to do and how you want to do it.
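If the hard-link route is acceptable, here is a small Python sketch of the same idea (both paths are hypothetical); it mirrors a file that lives outside the build context into it without copying the bytes, provided both paths are on the same filesystem:

import os

src = "/shared/libs/common_utils.py"        # file outside the build context (placeholder)
dst = "./build_context/common_utils.py"     # destination inside the build context (placeholder)

os.makedirs(os.path.dirname(dst), exist_ok=True)
if os.path.exists(dst):
    os.remove(dst)                          # os.link fails if the destination already exists
os.link(src, dst)                           # hard link: same inode, no data copied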

Where are source files stored on Google Cloud Platform when deployed from local machine

I have just deployed a basic NodeJS Express app on Google Cloud Platform from IntelliJ IDEA. However, I cannot find or browse the source files. I have searched in the Development tab and the App Engine tab; they show the project but not the actual files. I can access the application from my browser and it is running fine. I can see the activity and requests coming into the application, but I cannot see the source files. I tried searching for them in the Google Cloud Console terminal and I cannot locate the files there either. It's puzzling because I don't know where the files are being served from.
AFAIK seeing the live app code/static content directly in the developer console is not possible (at least not yet), not even for the standard environment apps.
For apps using the flexible environment (which includes Node.js apps), accessing the live app source code may be even more complex, as what's actually executed on GAE is a container image (as opposed to the plain app source files of a standard environment app). From Deploying your program:
Deploy your app using the gcloud app deploy command. This command automatically builds a container image for you by using the Container Builder service (Beta) before deploying the image to the App Engine flexible environment control plane. The container will include any local modifications you've made to the runtime image.
Since what is deployed is fundamentally a Docker container image, it might be possible to extract its content using the docker export command:
Usage: docker export [OPTIONS] CONTAINER
Export a container's filesystem as a tar archive
Options:
--help Print usage
-o, --output string Write to a file, instead of STDOUT
The docker export command does not export the contents of volumes associated with the container. If a volume is mounted on top of an existing directory in the container, docker export will export the contents of the underlying directory, not the contents of the volume.
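As a side note, the same export can also be driven from Python via the Docker SDK (docker-py); the container name below is a placeholder, and this is equivalent to running docker export -o app-filesystem.tar <container>:

import docker  # Docker SDK for Python (docker-py), assumed to be installed

client = docker.from_env()
container = client.containers.get("my-gae-container")    # placeholder name/ID

with open("app-filesystem.tar", "wb") as tar_file:
    for chunk in container.export():                     # streams the container filesystem as a tar archive
        tar_file.write(chunk)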
One way of checking the exact structure of the deployed app (at least in the standard environment) is to download the app code and check it locally - this may be useful if a suspected incorrect deployment puts a question mark over the local development repository the deployment originated from. Not sure if this is possible with the flexible environment, though.
The accepted answer to the recent Deploy webapp on GAE then do changes online from GAE console post appears to indicate that reading and maybe even modifying live app code might be possible (but I didn't try it myself and it's not clear whether it would also work for the flexible environment).

access certain folder in azure cloud service

In my code (which has a worker role) I need to specify a path to a directory (a third-party library requires it). Locally I've included the folder in the project and just give the full path to it. However, after deployment I of course need a new path. How do I confirm that the whole folder has been deployed, and how do I determine its new path?
Edit:
I added the folder to the role node in Visual Studio and accessed it like this: Path.Combine(Environment.GetEnvironmentVariable("RoleRoot"), "my_folder");
Will this directory be used for reading and writing? If yes, you should use a LocalStorage resource. https://azure.microsoft.com/en-us/documentation/articles/cloud-services-configure-local-storage-resources/ shows how to use this.
If the directory is only for reading (i.e. you have binaries or config files there), then you can use the %RoleRoot% environment variable to identify the path where your package was deployed to, then just append whatever folder you referenced in your project (i.e. %RoleRoot%\Myfiles).
I'd take a slightly different approach. Place the 3rd party package into Windows Azure blob storage, then during role startup, you can download/extract it and place the files into the available Local storage (giving it whatever permissions the app needs). Then leverage that location from your application via the same local storage configuration entry.
This should help you reduce the size of your deployment package as well as give you the ability to update the 3rd party components without completely redeploying your solution. And by leveraging it on startup, you can guarantee that the files will be there in case the role instance gets torn down and rebuilt.
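Worker role startup tasks are normally written in .NET, so purely as a language-neutral illustration of the download-and-extract pattern described above, here is a Python sketch using azure-storage-blob; the connection string, container and blob names, and the local storage path are all placeholders:

import zipfile
from azure.storage.blob import BlobClient

blob = BlobClient.from_connection_string(
    conn_str="<storage-connection-string>",   # placeholder
    container_name="third-party",             # hypothetical container holding the package
    blob_name="vendor-package.zip",           # hypothetical packaged 3rd party library
)

local_zip = "vendor-package.zip"
with open(local_zip, "wb") as f:
    f.write(blob.download_blob().readall())   # download the package at startup

# Extract into the resolved local storage path so the app can reference it from there.
zipfile.ZipFile(local_zip).extractall("<local-storage-path>")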

Configure the multimedia components to get published inside the website folder

I need to publish JS and CSS files using multimedia components.
The multimedia components get published outside the website folder, similar to the images, in
“C:\tridion\temp\pub14\Includes\scripts”
Any idea how to configure the multimedia components so that they get published inside the website folder?
We are using IIS to deploy the website.
You can control this by editing cd_storage_conf.xml. In this file you can configure where binaries go for each publication. For example, you probably have something like this configured: <Item typeMapping="Binary" cached="false" storageId="myStorageId"/>, and this myStorageId storage is defined inside the Storages element like:
<Storage Type="filesystem" Class="com.tridion.storage.filesystem.FSDAOFactory"
Id="myStorageId" defaultFilesystem="true" defaultStorage="true">
<Root Path="c:\temp\" />
</Storage>
If that is the case, then you need to change the root path to point to the root of your web application in IIS. More about how to configure the storage is available in the documentation (logon required).
You can override the path of your multimedia binary using template code, as long as you have a Structure Group created for the same path.
For example, if you want to publish a particular binary to the \css\images folder, you first have to create the Structure Group for that path (\css\images) and use the Structure Group ID in the following code to publish the binary:
engine.AddBinary(Binary.Id, templateID, binaryStructureGroupID, Binary.BinaryContent.GetByteArray(), Binary.FileName);
