I am trying to install the KNN plugin to get the following code working:
from elasticsearch import Elasticsearch
from elasticsearch.helpers import bulk
es = Elasticsearch('http://127.0.0.1:9200', verify_certs=False)
settings = {
    "settings": {
        "index": {
            "knn": "true",
            "knn.space_type": "cosinesimil"
        }
    },
    "mappings": {
        "dynamic": "true",
        "_source": {
            "enabled": "true"
        },
        "properties": {
            "vector": {
                "type": "knn_vector",
                "dimension": "768"
            }
        }
    }
}
es.indices.create(index='document_embeddings', ignore=[400, 404], body=settings)
but am getting the following error:
{'error': {'root_cause': [{'type': 'illegal_argument_exception',
'reason': 'unknown setting [index.knn] please check that any required plugins are installed, or check the breaking changes documentation for removed settings'}],
'type': 'illegal_argument_exception',
'reason': 'unknown setting [index.knn] please check that any required plugins are installed, or check the breaking changes documentation for removed settings',
'suppressed': [{'type': 'illegal_argument_exception',
'reason': 'unknown setting [index.knn.space_type] please check that any required plugins are installed, or check the breaking changes documentation for removed settings'}]},
'status': 400}
I think the cause is that the KNN plugin is not installed, so I am installing it as follows, where the zip file is downloaded from here.
-> Installing file://elastiknn-8.6.1.0.zip
-> Downloading file://elastiknn-8.6.1.0.zip
-> Failed installing file://elastiknn-8.6.1.0.zip
-> Rolling back file://elastiknn-8.6.1.0.zip
-> Rolled back file://elastiknn-8.6.1.0.zip
Exception in thread "main" java.net.UnknownHostException: elastiknn-8.6.1.0.zip
at java.base/sun.nio.ch.NioSocketImpl.connect(NioSocketImpl.java:560)
at java.base/java.net.Socket.connect(Socket.java:666)
at java.base/sun.net.ftp.impl.FtpClient.doConnect(FtpClient.java:1045)
at java.base/sun.net.ftp.impl.FtpClient.tryConnect(FtpClient.java:1010)
at java.base/sun.net.ftp.impl.FtpClient.connect(FtpClient.java:1102)
at java.base/sun.net.ftp.impl.FtpClient.connect(FtpClient.java:1088)
at java.base/sun.net.www.protocol.ftp.FtpURLConnection.connect(FtpURLConnection.java:318)
at java.base/sun.net.www.protocol.ftp.FtpURLConnection.getInputStream(FtpURLConnection.java:424)
at org.elasticsearch.plugins.cli.InstallPluginAction.downloadZip(InstallPluginAction.java:465)
at org.elasticsearch.plugins.cli.InstallPluginAction.download(InstallPluginAction.java:329)
at org.elasticsearch.plugins.cli.InstallPluginAction.execute(InstallPluginAction.java:247)
at org.elasticsearch.plugins.cli.InstallPluginCommand.execute(InstallPluginCommand.java:89)
at org.elasticsearch.common.cli.EnvironmentAwareCommand.execute(EnvironmentAwareCommand.java:54)
at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:85)
at org.elasticsearch.cli.MultiCommand.execute(MultiCommand.java:94)
at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:85)
at org.elasticsearch.cli.Command.main(Command.java:50)
at org.elasticsearch.launcher.CliToolLauncher.main(CliToolLauncher.java:64)
The command that I am running to install the plugin is:
sudo bin/elasticsearch-plugin install file://elastiknn-8.6.1.0.zip
My Elasticsearch version (8.6.1) seems perfectly compatible with that of elastiknn. Can anyone please tell me how I can get a working KNN setup with Elasticsearch on my Mac? Thanks!
Use the dense_vector type instead; it is built into Elasticsearch 8.x, so no plugin is required. Read this documentation.
As a side note, the plugin install itself fails because file://elastiknn-8.6.1.0.zip is parsed as a URL whose host is elastiknn-8.6.1.0.zip (hence the UnknownHostException); a local file needs an absolute path, e.g. sudo bin/elasticsearch-plugin install file:///full/path/to/elastiknn-8.6.1.0.zip.
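Here is a minimal sketch of the index creation using dense_vector, carrying over the field and index names from your question (it assumes Elasticsearch 8.x, where "index": True plus "similarity": "cosine" enables approximate kNN search in place of the plugin's cosinesimil space type):

from elasticsearch import Elasticsearch

es = Elasticsearch('http://127.0.0.1:9200', verify_certs=False)

# dense_vector is built in, so no index.knn settings are needed
settings = {
    "mappings": {
        "properties": {
            "vector": {
                "type": "dense_vector",
                "dims": 768,
                "index": True,
                "similarity": "cosine"
            }
        }
    }
}

es.indices.create(index='document_embeddings', ignore=[400, 404], body=settings)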
I'm troubleshooting a webpack error.
Command: bin/webpack --colors --progress
Produces this error:
ERROR in ./node_modules/@flatfile/sdk/dist/index.js 351:361
Module parse failed: Unexpected token (351:361)
File was processed with these loaders:
* ./node_modules/babel-loader/lib/index.js
You may need an additional loader to handle the result of these loaders.
| class v extends i {
| constructor(e, t) {
> super(e), r(this, "code", "FF-UA-00"), r(this, "name", "UnauthorizedError"), r(this, "debug", "The JWT was not signed with a recognized private key or you did not provide the necessary information to identify the end user"), r(this, "userMessage", "There was an issue establishing a secure import session."), e && (this.debug = e), this.code = t ?? "FF-UA-00";
| }
| }
@ ./app/javascript/src/app/pages/content_assets/Index.vue?vue&type=script&lang=ts& (./node_modules/babel-loader/lib??ref--8-0!./node_modules/vue-loader/lib??vue-loader-options!./app/javascript/src/app/pages/content_assets/Index.vue?vue&type=script&lang=ts&) 22:0-41 125:6-14
@ ./app/javascript/src/app/pages/content_assets/Index.vue
@ ./app/javascript/packs/app.js
NOTES
I found what appears to be an identical issue reported in the Flatfile project: https://github.com/FlatFilers/sdk/issues/83
Looks like ES2020 was emitted to the /dist folder, so my CRA babel-loader is not able to parse it; in order to fix it I need to include the path in my webpack config.
Node v16.13.1
We're using webpack with a Rails project via the webpacker package ("@rails/webpacker": "5.4.3"), which depends on webpack@4.46.0.
When I change to Node v14x and rebuild node_modules (yarn install) webpack compiles successfully.
The line referenced in the error (351:361) does not exist when I go check the file in node_modules/
We have a yarn.lock file, which I delete and recreate before running yarn install. I also delete the node_modules directory to ensure a "fresh" download of the correct packages.
We have a babel.config.js file...
module.exports = function(api) {
  var validEnv = ['development', 'test', 'production']
  var currentEnv = api.env()
  var isDevelopmentEnv = api.env('development')
  var isProductionEnv = api.env('production')
  var isTestEnv = api.env('test')

  if (!validEnv.includes(currentEnv)) {
    throw new Error(
      'Please specify a valid `NODE_ENV` or ' +
        '`BABEL_ENV` environment variables. Valid values are "development", ' +
        '"test", and "production". Instead, received: ' +
        JSON.stringify(currentEnv) +
        '.'
    )
  }

  return {
    presets: [
      isTestEnv && [
        '@babel/preset-env',
        {
          targets: {
            node: 'current'
          }
        }
      ],
      (isProductionEnv || isDevelopmentEnv) && [
        '@babel/preset-env',
        {
          forceAllTransforms: true,
          useBuiltIns: 'entry',
          corejs: 3,
          modules: false,
          exclude: ['transform-typeof-symbol']
        }
      ],
      ['babel-preset-typescript-vue', { allExtensions: true, isTSX: true }]
    ].filter(Boolean),
    plugins: [
      'babel-plugin-macros',
      '@babel/plugin-syntax-dynamic-import',
      isTestEnv && 'babel-plugin-dynamic-import-node',
      '@babel/plugin-transform-destructuring',
      [
        '@babel/plugin-proposal-class-properties',
        {
          loose: true
        }
      ],
      [
        '@babel/plugin-proposal-object-rest-spread',
        {
          useBuiltIns: true
        }
      ],
      [
        '@babel/plugin-transform-runtime',
        {
          helpers: false,
          regenerator: true,
          corejs: false
        }
      ],
      [
        '@babel/plugin-transform-regenerator',
        {
          async: false
        }
      ]
    ].filter(Boolean)
  }
}
Ultimately I want to get webpack to compile. If you had advice about any of the following questions, it would help a lot.
Why would changing the Node version (only) cause different webpack behavior? We aren't changing the webpack version or the version of the @flatfile package that's causing the error.
Why is the error pointing to a line that doesn't exist in the package? Is this evidence of some kind of caching problem?
Does the workaround mentioned in the linked GitHub issue shed light on my problem?
I'll take a stab at this.
I believe your issue is that webpack 4 does not support the nullish coalescing operator due to its dependency on acorn 6. See this webpack issue and this PR comment.
You haven't specified the exact minor version of Node.js 14.x that worked for you. I will assume it was a version that did not fully support the nullish coalescing operator, or at least one that @babel/preset-env's targets option understood to not support ??, so it was transpiled by babel and thus webpack didn't complain. You can see which versions of Node support nullish coalescing on node.green.
As for the error pointing to a nonexistent line, I don't fully understand the point you are making there, so I'm not focusing on it in the proposed solution.
I'm not sure what the proposed workaround in the linked issue is, maybe the comment about "include the path on my webpack config", but yes, the issue does seem relevant, as it points to the nullish coalescing operator as the source of the problem.
You can try to solve this by:
1. adding @babel/plugin-proposal-nullish-coalescing-operator to your babel config's plugins (see the babel.config.js excerpt after the webpack rule below), and
2. updating your webpack config to run @flatfile/sdk through babel-loader so the nullish coalescing operator gets transpiled:
{
  test: /\.jsx?$/,
  exclude: filename => {
    return /node_modules/.test(filename) && !/@flatfile\/sdk\/dist/.test(filename)
  },
  use: ['babel-loader']
}
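And for the babel side, a sketch of the plugins addition in the babel.config.js you posted; the plugin name is the real babel package, but where exactly you slot it into the array is up to you:

plugins: [
  // transpile ?? so webpack 4 / acorn 6 never has to parse it
  '@babel/plugin-proposal-nullish-coalescing-operator',
  'babel-plugin-macros',
  '@babel/plugin-syntax-dynamic-import',
  // ...the rest of your existing plugins
].filter(Boolean)

Remember to install it first (yarn add -D @babel/plugin-proposal-nullish-coalescing-operator).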
Another possibility is to upgrade webpacker to a version that depends upon webpack v5 (webpacker 6.x does).
One final remark, when you say
We have a yarn.lock file, which I delete and recreate before running yarn install.
you probably should not be deleting the lock file before each install, as it negates the purpose of a lock file altogether.
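If the goal is a clean, reproducible install, yarn v1's yarn install --frozen-lockfile is the usual approach: it fails instead of silently regenerating the lock file.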
As the title says, I want to host an Azure Function locally with VS Code, but I'm getting an error.
Python version 3.9.12 (python3).
Azure Functions Core Tools
Core Tools Version: 4.0.4483 Commit hash: N/A (64-bit)
Function Runtime Version: 4.1.3.17473
host.json:
{
  "version": "2.0",
  "logging": {
    "applicationInsights": {
      "samplingSettings": {
        "isEnabled": true,
        "excludedTypes": "Request"
      }
    }
  },
  "extensionBundle": {
    "id": "Microsoft.Azure.Functions.ExtensionBundle",
    "version": "[2.*, 3.0.0)"
  }
}
local.settings.json:
{
  "IsEncrypted": false,
  "Values": {
    "FUNCTIONS_WORKER_RUNTIME": "python",
    "AzureWebJobsStorage": ""
  }
}
Error Message:
Functions:
HttpTrigger1: [GET,POST] http://localhost:7071/api/HttpTrigger1
For detailed output, run func with --verbose flag.
....
[2022-05-09T06:52:10.300Z] from . import dispatcher
[2022-05-09T06:52:10.300Z] File "/opt/homebrew/Cellar/azure-functions-core-tools@4/4.0.4483/workers/python/3.9/OSX/X64/azure_functions_worker/dispatcher.py", line 19, in <module>
[2022-05-09T06:52:10.300Z] import grpc
[2022-05-09T06:52:10.300Z] File "/opt/homebrew/Cellar/azure-functions-core-tools@4/4.0.4483/workers/python/3.9/OSX/X64/grpc/__init__.py", line 23, in <module>
[2022-05-09T06:52:10.300Z] from grpc._cython import cygrpc as _cygrpc
[2022-05-09T06:52:10.300Z] ImportError: dlopen(/opt/homebrew/Cellar/azure-functions-core-tools@4/4.0.4483/workers/python/3.9/OSX/X64/grpc/_cython/cygrpc.cpython-39-darwin.so, 0x0002): tried: '/opt/homebrew/Cellar/azure-functions-core-tools@4/4.0.4483/workers/python/3.9/OSX/X64/grpc/_cython/cygrpc.cpython-39-darwin.so' (mach-o file, but is an incompatible architecture (have 'x86_64', need 'arm64e')), '/usr/local/lib/cygrpc.cpython-39-darwin.so' (no such file), '/usr/lib/cygrpc.cpython-39-darwin.so' (no such file)
[2022-05-09T06:52:13.512Z] Host lock lease acquired by instance ID '0000000000000000000000008F1C7F2E'.
After reproducing from our end, we observed that an arm64 Python will never be able to load an x86_64 shared library, so we need to enable Rosetta, which works on a process-by-process level.
Steps to be followed:
1. Enable Rosetta for your terminal (e.g. the "Open using Rosetta" checkbox for iTerm).
2. Install Homebrew, Azure Functions Core Tools, and Python under that Rosetta (x86_64) Homebrew.
3. Then run your Azure Function.
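From a Rosetta terminal session, those steps could look roughly like the sketch below (the tap and formula names are the standard Azure ones; /usr/local is where x86_64 Homebrew installs, as opposed to /opt/homebrew for arm64):

# inside an x86_64 (Rosetta) terminal session
arch -x86_64 /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"

# x86_64 Homebrew lives under /usr/local, not /opt/homebrew
/usr/local/bin/brew tap azure/functions
/usr/local/bin/brew install azure-functions-core-tools@4 python@3.9

# start the function app with the x86_64 toolchain
func start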
REFERENCES:
Support running on M1 Macs [Python]
I am trying to access the snowflake datasource using "great_expectations" library.
The following is what I tried so far:
from ruamel import yaml
import great_expectations as ge
from great_expectations.core.batch import BatchRequest, RuntimeBatchRequest
context = ge.get_context()
datasource_config = {
    "name": "my_snowflake_datasource",
    "class_name": "Datasource",
    "execution_engine": {
        "class_name": "SqlAlchemyExecutionEngine",
        "connection_string": "snowflake://myusername:mypass@myaccount/myDB/myschema?warehouse=mywh&role=myadmin",
    },
    "data_connectors": {
        "default_runtime_data_connector_name": {
            "class_name": "RuntimeDataConnector",
            "batch_identifiers": ["default_identifier_name"],
        },
        "default_inferred_data_connector_name": {
            "class_name": "InferredAssetSqlDataConnector",
            "include_schema_name": True,
        },
    },
}
print(context.test_yaml_config(yaml.dump(datasource_config)))
I initialized great_expectations before executing the above code:
great_expectations init
but I am getting the error below:
great_expectations.exceptions.exceptions.DatasourceInitializationError: Cannot initialize datasource my_snowflake_datasource, error: 'NoneType' object has no attribute 'create_engine'
What am I doing wrong?
Your configuration seems to be ok, corresponding to the example here.
If you look at the traceback you should notice that the error propagates starting at the file great_expectations/execution_engine/sqlalchemy_execution_engine.py in your virtual environment.
The actual line where the error occurs is:
self.engine = sa.create_engine(connection_string, **kwargs)
And if you search for that sa at the top of that file:
try:
    import sqlalchemy as sa

    make_url = import_make_url()
except ImportError:
    sa = None
So sqlalchemy is not installed, which you don't get automatically in your environment when you install great_expectations. The thing to do is to install snowflake-sqlalchemy, since you want to use sqlalchemy's snowflake plugin (an assumption based on your connection_string):
/your/virtualenv/bin/python -m pip install snowflake-sqlalchemy
After that you should no longer get an error; it looks like test_yaml_config is waiting for the connection to time out.
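As a quick, hypothetical sanity check that both packages are now importable in the same virtualenv:

/your/virtualenv/bin/python -c "import sqlalchemy, snowflake.sqlalchemy; print(sqlalchemy.__version__)"

If that prints a version instead of raising an ImportError, the execution engine should be able to create its engine.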
What worries me greatly is the documented use of a deprecated API of ruamel.yaml.
The function ruamel.yaml.dump is going to be removed in the near future, and you
should use the .dump() method of a ruamel.yaml.YAML() instance.
You should use the following code instead:
import sys
from ruamel.yaml import YAML
import great_expectations as ge

context = ge.get_context()

datasource_config = {
    "name": "my_snowflake_datasource",
    "class_name": "Datasource",
    "execution_engine": {
        "class_name": "SqlAlchemyExecutionEngine",
        "connection_string": "snowflake://myusername:mypass@myaccount/myDB/myschema?warehouse=mywh&role=myadmin",
    },
    "data_connectors": {
        "default_runtime_data_connector_name": {
            "class_name": "RuntimeDataConnector",
            "batch_identifiers": ["default_identifier_name"],
        },
        "default_inferred_data_connector_name": {
            "class_name": "InferredAssetSqlDataConnector",
            "include_schema_name": True,
        },
    },
}

yaml = YAML()
yaml.dump(datasource_config, sys.stdout, transform=context.test_yaml_config)
I'll make a PR for great-expectations to update their documentation/use of ruamel.yaml.
I am trying to write a small program using the AzureML Python SDK (v1.0.85) to register an Environment in AMLS and use that definition to construct a local Conda environment when experiments are being run (for a pre-trained model). The code works fine for simple scenarios where all dependencies are loaded from Conda/ public PyPI, but when I introduce a private dependency (e.g. a utils library) I am getting a InternalServerError with the message "Error getting recipe specifications".
The code I am using to register the environment is (after having authenticated to Azure and connected to our workspace):
environment_name = config['environment']['name']
py_version = "3.7"
conda_packages = ["pip"]
pip_packages = ["azureml-defaults"]
private_packages = ["./env-wheels/utils-0.0.3-py3-none-any.whl"]

print(f"Creating environment with name {environment_name}")
environment = Environment(name=environment_name)
conda_deps = CondaDependencies()

print(f"Adding Python version: {py_version}")
conda_deps.set_python_version(py_version)

for conda_pkg in conda_packages:
    print(f"Adding Conda dependency: {conda_pkg}")
    conda_deps.add_conda_package(conda_pkg)

for pip_pkg in pip_packages:
    print(f"Adding Pip dependency: {pip_pkg}")
    conda_deps.add_pip_package(pip_pkg)

for private_pkg in private_packages:
    print(f"Uploading private wheel from {private_pkg}")
    private_pkg_url = Environment.add_private_pip_wheel(workspace=ws, file_path=Path(private_pkg).absolute(), exist_ok=True)
    print(f"Adding private Pip dependency: {private_pkg_url}")
    conda_deps.add_pip_package(private_pkg_url)

environment.python.conda_dependencies = conda_deps
environment.register(workspace=ws)
And the code I am using to create the local Conda environment is:
amls_environment = Environment.get(ws, name=environment_name, version=environment_version)
print(f"Building environment...")
amls_environment.build_local(workspace=ws)
The exact error message being returned when build_local(...) is called is:
Traceback (most recent call last):
File "C:\Anaconda\envs\AMLSExperiment\lib\site-packages\azureml\core\environment.py", line 814, in build_local
raise error
File "C:\Anaconda\envs\AMLSExperiment\lib\site-packages\azureml\core\environment.py", line 807, in build_local
recipe = environment_client._get_recipe_for_build(name=self.name, version=self.version, **payload)
File "C:\Anaconda\envs\AMLSExperiment\lib\site-packages\azureml\_restclient\environment_client.py", line 171, in _get_recipe_for_build
raise Exception(message)
Exception: Error getting recipe specifications. Code: 500
: {
"error": {
"code": "ServiceError",
"message": "InternalServerError",
"detailsUri": null,
"target": null,
"details": [],
"innerError": null,
"debugInfo": null
},
"correlation": {
"operation": "15043e1469e85a4c96a3c18c45a2af67",
"request": "19231be75a2b8192"
},
"environment": "westeurope",
"location": "westeurope",
"time": "2020-02-28T09:38:47.8900715+00:00"
}
Process finished with exit code 1
Has anyone seen this error before or able to provide some guidance around what the issue may be?
The issue was with our firewall blocking the required requests between AMLS and the storage container (presumably to fetch the environment definitions/private wheels).
We resolved this by updating the firewall with appropriate ALLOW rules for the AMLS service to contact and read from the attached storage container.
Assuming that you'd like to run the script on a remote compute, my suggestion would be to pass the environment you just "got" to a RunConfiguration, then pass that to a ScriptRunConfig, Estimator, or PythonScriptStep:
from azureml.core import ScriptRunConfig
from azureml.core.runconfig import DEFAULT_CPU_IMAGE
src = ScriptRunConfig(source_directory=project_folder, script='train.py')
# Set compute target to the one created in previous step
src.run_config.target = cpu_cluster.name
# Set environment
amls_environment = Environment.get(ws, name=environment_name, version=environment_version)
src.run_config.environment = amls_environment
run = experiment.submit(config=src)
run
Check out the rest of the notebook here.
If you're looking for a local run this notebook might help.
I am using the google-closure-compiler grunt task to minify JavaScript files. I've defined the task like:
'closure-compiler': {
  deviceDetails: {
    files: {
      'target.min.js': 'source.js'
    },
    options: {
      compilation_level: 'SIMPLE'
    }
    // args: [
    //   '--js', 'source.js',
    //   '--compilation_level', 'SIMPLE',
    //   '--js_output_file', 'out.js',
    //   '--debug'
    // ]
  }
}
This gives me the following error:
[ { '29': 1,
_state: 2,
_result: [Error: not implemented],
_subscribers: [] } ]
Warning: Compilation error Use --force to continue.
Aborted due to warnings.
Earlier I was facing a Promise issue; for that I installed the polyfill module:
require('es6-promise').polyfill();
I am running npm version 1.3.10 and, unfortunately, I can't upgrade it right now. I also followed the alternative approach of using args, but am still facing the same error.
So after a bit of analysis, I am using the two grunt plugins below:
1. grunt-closure-tools
2. google-closure-compiler
It was a problem with the legacy npm version.