Per this very helpful Q&A on StackOverflow I am able to build a private conda package and then install it by placing it in a specific folder. I can also host it somewhere on the web and simply use the URL as the channel, with the prefix url://.
In order to keep my code private, I put the conda channel in Azure Blob Storage and created a SAS to access it. In theory this keeps it private: only someone with the full SAS URL, including the token, can access it.
The problem is that the SAS takes the form of a URL query string: https://<storage-name>.blob.core.windows.net/<container-name>?se=2019-07-24T02%3A53%3A48Z&sp=rl&sv=2018-03-28&sr=c&comp=list&restype=container&sig=REDACTED_TOKEN. When I pass it to conda, it breaks the URL after the ?, doesn't use the full URL, and gets a 404 in response. See the Microsoft docs for the full specification.
PowerShell example:
PS C:\Users\ydima> $sas = "https://REDACTED.blob.core.windows.net/conda-channel-1?se=2019-07-24T02%3A53%3A48Z&sp=rl&sv=2018-03-28&sr=c&comp=list&restype=container&sig=REDACTED"
PS C:\Users\ydima> conda install -c "url:///"$sas crawford-utils
Collecting package metadata (current_repodata.json): failed
CondaHTTPError: HTTP 000 CONNECTION FAILED for url <https://url/win-64/current_repodata.json>
Elapsed: -
An HTTP error occurred when trying to retrieve this URL.
HTTP errors are often intermittent, and a simple retry will get you on your way.
ConnectionError(MaxRetryError("HTTPSConnectionPool(host='url', port=443): Max retries exceeded with url: /win-64/current_repodata.json (Caused by NewConnectionError('<urllib3.connection.VerifiedHTTPSConnection object at 0x0000026183E54EB8>: Failed to establish a new connection: [Errno 11001] getaddrinfo failed'))"))
'sp' is not recognized as an internal or external command,
operable program or batch file.
'sv' is not recognized as an internal or external command,
operable program or batch file.
'sr' is not recognized as an internal or external command,
operable program or batch file.
Name of second file to compare:
Any idea how I can get conda to use a URL that includes a query string?
According to the source code of the Channel class in the conda tool, it does not support query strings in a URL.
So if you want to use a container in Azure Blob Storage as the channel of a private conda mirror, you need to set a public access level on the container, or use the static website hosting feature of Azure Storage.
Otherwise, a possible workaround is to set up a custom proxy for conda that automatically appends the SAS token query string to each resource URL of the channel; please refer to the document Using the .condarc conda configuration file for how to set a proxy server in the .condarc file.
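For reference, the proxy entries in .condarc look roughly like this (the localhost address below is only a placeholder for whatever rewriting proxy you actually run):
proxy_servers:
    http: http://localhost:8080
    https: http://localhost:8080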
Hope it helps.
So I found a cool solution. I create an Azure Storage account, and then create a blob container that is publicly accessible, but to protect it I name the blob container something random - like a long random string, which in effect can act as a token. So for example, in PowerShell:
PS C:> $azStorageName = "mystorage"
PS C:> $blobName = -join ((97..122) | Get-Random -Count 26 | ForEach-Object {[char]$_})
PS C:> "https://$azStorageName.blob.core.windows.net/$blobName"
https://mystorage.blob.core.windows.net/fwsjtizbpvaerukdomqhlgnycx
The code to generate the random string is based on this post.
Since Get-Random -Count 26 samples the 26 letters without replacement, the name is a random permutation of the alphabet, giving 26! (roughly 4 × 10^26) possible values, which I think is good enough for a secure token for this purpose.
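With that in place, the container URL can be used like any other channel URL, e.g. conda install -c https://mystorage.blob.core.windows.net/fwsjtizbpvaerukdomqhlgnycx crawford-utils (using the example names from above).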
I also plan on setting up CI in say Azure Pipelines, so that I can automate all this whenever I push code to the private GitHub repo.
Faced a data copy issue using Azure Fabric Data - minIO
In Azure Fabric Data, I set up a connection to my minIO server; the test connection succeeds and I can see all the buckets.
But when I try to access the contents of a bucket, I get the error:
"The operation has timed out"
or "The file operation is failed. A WebException with status NameResolutionFailure was thrown. The remote name could not be resolved: 'bucket-2.miniomyserverhost.com' Activity ID: 0794a825-7ba4-dfec4cfc8846"
Judging by the error, I need to adjust the "path style" URLs
But I can't find an example of how to do it right.
Can you suggest how to do this?
I registered two parameters in the configuration file:
MINIO_DOMAIN=(host)domain
MINIO_SERVER_URL=http://(host)domain
Here is my suggestion. To resolve the issue of "The operation has timed out" or:
The file operation is failed. A WebException with status NameResolutionFailure was thrown. The remote name could not be resolved
…while accessing the contents of a bucket in Azure Fabric Data using minIO, you need to set up the "path style" URLs correctly.
You have registered two parameters in the configuration file, MINIO_DOMAIN and MINIO_SERVER_URL. To use path style URLs, set MINIO_SERVER_URL to http://(host):9000.
Here is an example:
MINIO_DOMAIN=myminioserver.com
MINIO_SERVER_URL=http://myminioserver.com:9000
Make sure to replace myminioserver.com with the actual hostname or IP address of your minIO server.
I use Apache Airflow for daily ETL jobs. I installed it in Azure Kubernetes Service using the provided Helm chart. It had been running fine for half a year, but recently I have been unable to access the logs in the webserver (this always used to work fine).
I'm getting the following error:
*** Log file does not exist: /opt/airflow/logs/dag_id=analytics_etl/run_id=manual__2022-09-26T09:25:50.010763+00:00/task_id=copy_device_table/attempt=18.log
*** Fetching from: http://airflow-worker-0.airflow-worker.default.svc.cluster.local:8793/dag_id=analytics_etl/run_id=manual__2022-09-26T09:25:50.010763+00:00/task_id=copy_device_table/attempt=18.log
*** !!!! Please make sure that all your Airflow components (e.g. schedulers, webservers and workers) have the same 'secret_key' configured in 'webserver' section and time is synchronized on all your machines (for example with ntpd) !!!!!
****** See more at https://airflow.apache.org/docs/apache-airflow/stable/configurations-ref.html#secret-key
****** Failed to fetch log file from worker. Client error '403 FORBIDDEN' for url 'http://airflow-worker-0.airflow-worker.default.svc.cluster.local:8793/dag_id=analytics_etl/run_id=manual__2022-09-26T09:25:50.010763+00:00/task_id=copy_device_table/attempt=18.log'
For more information check: https://httpstatuses.com/403
What I have tried:
I've made sure that the log file exists (I can exec into the airflow-worker-0 pod and read the file on command line in the location specified in the error).
I've rolled back my deployment to an earlier commit from when I know for sure it was still working, but it made no difference.
I was using webserverSecretKeySecretName in the values.yaml configuration. I changed the secret to which that name was pointing (deleted it and created a new one, as described here: https://airflow.apache.org/docs/helm-chart/stable/production-guide.html#webserver-secret-key) but it didn't work (no difference, same error).
I changed the config to use a webserverSecretKey instead (in plain text), no difference.
My thoughts/observations:
The error states that the log file doesn't exist, but that's not true. It probably just can't access it.
The time is the same in all pods (I double-checked by exec-ing into them and typing date in the command line).
The webserver secret is the same in the worker, the scheduler, and the webserver (I double checked by exec-ing into them and finding the corresponding env variable)
Any ideas?
Turns out this was a known bug with the latest release (2.4.0) of the official Airflow Helm chart, reported here:
https://github.com/apache/airflow/discussions/26490
Should be resolved in version 2.4.1 which should be available in the next couple of days.
I have a script that retrieves a login for ECR, authenticates a DockerClient instance with the login credentials (reauth set to True), and then attempts to pull a nominated container image.
The code seems to work perfectly when run on my local machine interacting with the docker daemon on an EC2 instance, but when running from the EC2 instance itself I constantly get:
404 Client Error: Not Found ("repository XXXXXXXX.dkr.ecr.eu-west-2.amazonaws.com/autohld-runner not found: does not exist or no pull access")
The same repo is being used for both executing the code locally and remotely on the EC2 instance. I have tried setting the access to the image within ECR to allow pull for both everyone and my AWS ID. I have granted the role assigned to the EC2 instance Full Admin access also. All with no joy.
If I perform the same tasks on the EC2 instance via command line with the exact same repo URI (copied from the error), it works with no issue.
Is there something I am missing within docker-py?
url = "tcp://127.0.0.1:2375"
dockerd = docker.DockerClient(base_url=url, version='auto')
dockerd.login(username=ecr.username, password=ecr.password, email='none', registry=ecr.registry, reauth=True)
dockerd.images.pull(ecr.get_repo(instance.tags['Container']), tag='latest')
get_repo returns the full URI as reported in the error message; the Container tag holds the name 'autohld-runner'.
Thanks
It seems that if the registry has been accessed via the CLI, an auth token is stored and Docker remembers it, which allows subsequent calls to work. In this case, however, the instance is starting up completely fresh and uses the login method within docker-py.
That doesn't seem to pass the credentials on to the pull. I have found that using the auth_config named argument and passing in a dictionary of auth parameters works:
auth_creds = {'username': ecr.username, 'password': ecr.password}
dockerd.images.pull(ecr.get_repo(instance.tags['Container']), tag='latest', auth_config=auth_creds)
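Putting it all together, here is a minimal sketch of the whole flow, assuming boto3 is used to obtain the ECR token (the region and repository name are taken from the question; adjust them to your own):
import base64
import boto3
import docker

# Get a temporary ECR login (the token is valid for 12 hours).
ecr_client = boto3.client("ecr", region_name="eu-west-2")
auth_data = ecr_client.get_authorization_token()["authorizationData"][0]
username, password = base64.b64decode(auth_data["authorizationToken"]).decode().split(":", 1)
registry = auth_data["proxyEndpoint"].replace("https://", "")

# Connect to the remote daemon and pass the credentials directly to pull()
# via auth_config instead of relying on a prior login().
dockerd = docker.DockerClient(base_url="tcp://127.0.0.1:2375", version="auto")
image = dockerd.images.pull(registry + "/autohld-runner", tag="latest",
                            auth_config={"username": username, "password": password})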
HTH
I'm trying to read files from an Azure storage account. In particular, I'd like to read all files contained in a certain folder, for example:
lines = sc.textFile('/path_to_azure_folder/*')
I am not quite sure what the path should be. I tried the blob service endpoint URL from Azure, followed by the folder path (with both http and https):
lines = sc.textFile('https://container_name.blob.core.windows.net/path_to_folder/*')
but it did not work:
diagnostics: Application XXXXXX failed 5 times due to AM Container for
XXXXXXXX exited with exitCode: 1 Diagnostics: Exception from
container-launch. Container id: XXXXXXXXX Exit code: 1
The URL I provided is the same one I get from the Cyberduck app when I click on 'Info'.
Your path should look like this:
lines = sc.textFile("wasb://containerName@storageAccountName.blob.core.windows.net/folder_path/*")
This should solve your issue.
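If the container is not public, you will typically also need to make the storage account key available to Hadoop before reading. A rough PySpark sketch, with placeholder account and key values:
# Placeholder values; substitute your real storage account name and access key.
storage_account = "storageAccountName"
account_key = "<storage-account-access-key>"

# Register the key with the Hadoop configuration so wasb:// paths can authenticate
# (this also assumes the hadoop-azure and azure-storage jars are on the classpath).
sc._jsc.hadoopConfiguration().set(
    "fs.azure.account.key." + storage_account + ".blob.core.windows.net", account_key)

lines = sc.textFile("wasb://containerName@" + storage_account +
                    ".blob.core.windows.net/folder_path/*")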
If you are trying to read all the blobs in an Azure Storage account, you might want to look into the tools and libraries we offer for retrieving and manipulating your data. Getting started doc here.
Hope this is helpful!
I can't get SqlDataProvider to work when executed in an fsx script running in an Azure Web Site.
I have started from the samples that Tomas Petricek has here: https://github.com/tpetricek/Dojo-Suave-FsHome.
In short, it is an FSX script that is executed using the IIS httpPlatformHandler so that all HTTP requests to my Azure Web Site are forwarded to my F# script.
The F# Script use Suave to handle the requests.
When I tried adding some database access to my HTTP handlers I got into problems.
The problematic code looks like this:
[<Literal>]
let connStr = "Server=(localdb)\\v11.0;Initial Catalog=My_Database;Integrated Security=true;"
[<Literal>]
let resolutionFolder = __SOURCE_DIRECTORY__
FSharp.Data.Sql.Common.QueryEvents.SqlQueryEvent |> Event.add (printfn "Executing SQL: %s")
// the following line fails when executing in azure
type db = SqlDataProvider<connStr, Common.DatabaseProviderTypes.MSSQLSERVER, ResolutionPath = resolutionFolder>
let saveData someDataToSave =
let ctx = db.GetDataContext(Environment.GetEnvironmentVariable("SQLAZURECONNSTR_QUERIES"))
.....
/// code using the context here
This works just fine when I run it locally, but when I deploy it to the Azure site it fails at the line where the type db is created.
The error message is (line 70 is the line that has the type db = ...):
D:\home\site\wwwroot\app.fsx(70,11): error FS3033: The type provider
'FSharp.Data.Sql.SqlTypeProvider' reported an error: A network-related
or instance-specific error occurred while establishing a connection to
SQL Server. The server was not found or was not accessible. Verify
that the instance name is correct and that SQL Server is configured to
allow remote connections. (provider: SQL Network Interfaces, error: 52
- Unable to locate a Local Database Runtime installation. Verify that SQL Server Express is properly installed and that the Local Database
Runtime feature is enabled.)
The design-time database in the connStr is not available in the azure site, but I thought this is why we have the GetDataContext overload that takes a connection string to be used at run-time?
Is it because it is running as a script and not as compiled code that it is trying to access the database when creating the TypeProvider?
If yes, does it mean that my only option is to compile and provide the database code as a compiled assembly that I load and use in my Suave FSX script?
Reading the connection string from a config file does not work very well as this is an Azure site. I really need to get the connection string from an environment variable (which is set in the Azure management interface).
Hmm, this is a bit unfortunate - as @Fyodor mentioned in the comments, the problem is that the script-based deployment to Azure actually compiles the script on the Azure machine - and so you need to have a statically-resolved connection string that works on Azure.
There are three options:
1. Use a compiled project instead. If you compile your F# code locally and deploy the compiled code to Azure, it will work. Sadly, there are no good samples for that.
2. Do some clever trick to make the connection string accessible to the script at compile time.
3. Send a PR to the SQL provider so that you can give it the name of an environment variable and it reads the connection string from there.
I think (3) would actually be quite a nice and useful feature.
I'm not necessarily sure what the best way to do (2) would be. But I think you might be able to modify app.azure.fsx so that it creates a file (say connection.fsx) that contains something like:
module Connection
let [<Literal>] ConnString = "<Contents of SQLAZURECONNSTR_QUERIES>"
Then app.fsx could load this script and use Connection.ConnString as the argument of the SQL type provider.
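For example, app.azure.fsx could generate connection.fsx roughly like this (just a sketch, assuming it runs as part of the Azure deployment where the SQLAZURECONNSTR_QUERIES environment variable is available):
// Sketch: write connection.fsx with the connection string baked in as a literal,
// so that app.fsx can hand it to the type provider at compile time.
let connString = System.Environment.GetEnvironmentVariable("SQLAZURECONNSTR_QUERIES")
System.IO.File.WriteAllText(
    System.IO.Path.Combine(__SOURCE_DIRECTORY__, "connection.fsx"),
    "module Connection\n" +
    sprintf "let [<Literal>] ConnString = \"%s\"\n" connString)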