I have a query which I am passing via the command line:
aws cloudsearchdomain --endpoint-url http://myendpt search --search-query value --return _all_fields --cursor initial --size 100 --query-options {"defaultOperator":"or","fields":["id"],"operators":["and","escape","fuzzy","near","not","or","phrase","precedence","prefix","whitespace"]} --query-parser simple --profile myname
It responds with:
Unknown options: operators:[and, escape, fuzzy, near, not, or, phrase, precedence, prefix, whitespace], fields:[id]
I can assure you that the id field exists in AWS CloudSearch. I reverse-engineered the query from the online CloudSearch query tester into an AWS CLI command.
Please help.
Update:
This problem has been resolved in the updated aws-cli/1.8.4. If you are an Ubuntu/Linux user like me:
please do:
sudo pip uninstall awscli
sudo pip install awscli
aws --version
The solution for my Ruby implementation of the aws-sdk (version 2+):
client = Aws::CloudSearchDomain::Client.new(endpoint:'http://yoururl')
resp = client.search({
  cursor: "initial",
  facet: "{\"facet_name_!\":{},\"mentions\":{}}",
  query: "#{place_a_value_here}",
  query_options: "{\"defaultOperator\":\"or\",\"fields\":[\"yourfield\"],\"operators\":[\"and\",\"escape\",\"fuzzy\",\"near\",\"not\",\"or\",\"phrase\",\"precedence\",\"prefix\",\"whitespace\"]}",
  query_parser: "simple",
  return: "_all_fields",
  size: 1000,
  highlight: "{\"text\":{}}",
})
Summarizing the asker's solution from the comments: the issue is that you have to double-quote your JSON param, and then either single-quote (') or escape-double-quote (\") the JSON keys/values within that param.
For example, both of these are valid:
--query-options "{'defaultOperator':'and','fields':['name']}"
or
--query-options "{\"defaultOperator\":\"and\",\"fields\":[\"name\"]}"
I have a Lambda function named my-s3-function. I need to add this dependency to my Node.js Lambda. I have followed this part to update the script with the dependency included (though I didn't follow the step where you zip the folder using zip -r function.zip .; instead I zipped the folder by right-clicking it on my PC).
The zip file's structured like this inside:
|node_modules
   |<folders>
   |<folders>
   |<folders>
   ... // the list goes on
|index.js
|package-lock.json
Upon running the command aws lambda update-function-code --function-name my-s3-function --zip-file fileb://function.zip in the terminal, I get the following response:
An error occurred (MissingAuthenticationTokenException) when calling the UpdateFunctionCode operation: Missing Authentication Token
What should I do to resolve this?
Based on the comments, this was resolved by configuring the credentials as described in the documentation.
First try exporting the credentials as described in Environment variables to configure the AWS CLI. Once you are sure your credentials are correct, you can move them into the configuration and credential files.
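As a minimal sketch (the key values below are placeholders, not real credentials), exporting the variables for the current shell session before retrying the command looks like this:
export AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE
export AWS_SECRET_ACCESS_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
export AWS_DEFAULT_REGION=us-east-1
aws lambda update-function-code --function-name my-s3-function --zip-file fileb://function.zip
Alternatively, running aws configure once will write the same values to ~/.aws/credentials and ~/.aws/config.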
When I run the function locally on Node.js 11.7.0 it works, and when I run it in AWS Lambda on Node.js 8.10 it works, but I recently tried to run it in AWS Lambda on Node.js 10.x and I get the response below and this error in CloudWatch.
Any thoughts on how to correct this?
Response
{
  "success": false,
  "error": "Error: Could not find openssl on your system on this path: openssl"
}
CloudWatch Error
ERROR (node:8) [DEP0005] DeprecationWarning: Buffer() is deprecated due to security and usability issues. Please use the Buffer.alloc(), Buffer.allocUnsafe(), or Buffer.from() methods instead.
Function
...
const util = require('util');
const pem = require('pem');
...
return new Promise((fulfill) => {
  require('./certs').get(req, res, () => {
    return fulfill();
  });
}).then(() => {
  const createCSR = util.promisify(pem.createCSR);
  // This seems to be where the issue is coming from
  return createCSR({
    keyBitsize: 1024,
    hash: HASH,
    commonName: id.toString(),
    country: 'US',
    state: 'Maryland',
    organization: 'ABC', // Obfuscated
    organizationUnit: 'XYZ', // Obfuscated
  });
}).then(({ csr, clientKey }) => {
  ...
}).then(async ({ certificate, clientKey }) => {
  ...
}, (err) => {
  return res.status(404).json({
    success: false,
    error: err,
  });
});
...
I've tried with both "pem": "^1.14.3" and "pem": "^1.14.2".
I tried the answer documented by @Kris White, but I was not able to get it to work. Each execution resulted in the error Could not find openssl on your system on this path: /opt/openssl. I tried several different paths and approaches, but none worked well. It's entirely possible that I simply didn't copy the OpenSSL executable correctly.
Since I needed a working solution, I used the answer provided by @Wilfred Dittmer. I modified it slightly since I wasn't using Docker. I launched an Amazon Linux 2 server, built OpenSSL on it, transferred the package to my local machine, and deployed it via Serverless.
Create a file named create-openssl-zip.sh with the following contents. The script will create the Lambda Layer OpenSSL package.
#!/bin/bash -x
# This file should be copied to and run inside the /tmp folder
yum update -y
yum install autoconf bison gcc gcc-c++ libcurl-devel libxml2-devel -y
curl -sL http://www.openssl.org/source/openssl-1.1.1d.tar.gz | tar -xvz
cd openssl-1.1.1d
./config --prefix=/tmp/nodejs/openssl --openssldir=/tmp/nodejs/openssl && make && make install
cd /tmp
rm -rf nodejs/openssl/share nodejs/openssl/include
zip -r lambda-layer-openssl.zip nodejs
rm -rf nodejs openssl-1.1.1d
Then, follow these steps:
Open a terminal session in this project's root folder.
Run the following command to upload the Linux bash script.
curl -F "file=#create-openssl-zip.sh" https://file.io
Note: The command above uses the popular tool File.io to copy the script to the cloud temporarily so it can be securely retrieved from the build server.
Note: If curl is not installed on your dev machine, you can also upload the script manually using the File.io website.
Copy the URL for the uploaded file from either the terminal session or the File.io website.
Note: The url will look similar to this example: https://file.io/a1B2c3
Open the AWS Console to the EC2 Instances list.
Launch a new instance with these attributes:
AMI: Amazon Linux 2 AMI (HVM), SSD Volume Type (id: ami-0a887e401f7654935)
Instance Type: t2.micro
Instance Details: (use all defaults)
Storage: (use all defaults)
Tags: Name - 'build-lambda-layer-openssl'
Security Group: 'Create new security group' (use all defaults to ensure Instance will be publicly accessible via SSH over the internet)
When launching the instance and selecting a key pair, be sure to choose a Key Pair from the list to which you have access.
Launch the instance and wait for it to be accessible.
Once the instance is running, use an SSH Client to connect to the instance.
More details on how to open an SSH connection can be found here.
In the SSH terminal session, navigate to the tmp directory by running cd /tmp.
Download the bash script uploaded earlier by running curl {FILE_IO_URL} --output create-openssl-zip.sh.
Note: In the script above, replace FILE_IO_URL with the URL returned from File.io and copied in step 3.
Execute the bash script by running sudo bash ./create-openssl-zip.sh. The script may take a while to complete. You may need to confirm one or more package install prompts.
When the script completes, run the following command to upload the package to File.io: curl -F "file=@lambda-layer-openssl.zip" https://file.io.
Copy the URL for the uploaded file from the terminal session.
In the terminal session on the local development machine, run the following command to download the file: curl {FILE_IO_URL} --output lambda-layer-openssl.zip.
Note: In the script above, replace FILE_IO_URL with the URL returned from File.io and copied in step 13.
Note: If curl is not installed on your dev machine, you can also download the file manually by pasting the copied URL in the address bar of your favorite browser.
Close the SSH session.
In the EC2 Instances list, terminate the build-lambda-layer-openssl EC2 instance since it is not needed any longer.
The OpenSSL Lambda Layer is now ready to be deployed.
For completeness, here is a portion of my serverless.yml file:
functions:
  functionName:
    # ...
    layers:
      - { Ref: OpensslLambdaLayer }

layers:
  openssl:
    name: ${self:provider.stage}-openssl
    description: Contains openssl command line utility for lambdas that need it
    package:
      artifact: 'path\to\lambda-layer-openssl.zip'
    compatibleRuntimes:
      - nodejs10.x
      - nodejs12.x
    retain: false
...and here is how I configured PEM in the code file:
import * as pem from 'pem';
process.env.LD_LIBRARY_PATH = '/opt/nodejs/openssl/lib';
pem.config({
  pathOpenSSL: '/opt/nodejs/openssl/bin/openssl',
});
// other code...
I contacted AWS Support about this, and it turns out that the openssl library is still on the Node 10.x image, just not the command line utility. However, it's pretty easy to grab it off a standard AMI and use it as a Lambda layer.
Steps:
Launch an Amazon Linux 2 AMI as an EC2
SSH into the box, or use an SFTP utility to connect to the box
Copy the command line utility for openssl at /usr/bin/openssl somewhere you can work with it locally. In my case I downloaded it to my Mac even though it is a Linux file.
Verify that it's still marked as executable (run chmod a+x openssl if necessary, e.g. if you've downloaded it elsewhere)
Zip up the file
Optional: Upload it to an S3 bucket you can get to
Go to Lambda Layers in the AWS console
Create a new lambda layer. I named mine openssl and used the S3 pointer to the file on S3. You can also upload the zip directly if you have it on a local file system.
Attach the arn provided for the layer to your Lambda function. I use serverless so it was defined in the function setup per their documentation.
In your code, reference openssl as /opt/openssl, or you can avoid pathing it in your code (you may not have the option if it's a package you don't control) by adding /opt to your PATH, i.e.
process.env['PATH'] = process.env['PATH'] + ':' + process.env['LAMBDA_TASK_ROOT'] + ':/opt';
The layer will have been unzipped for you, and because you set it to be executable beforehand, it should just work. The underlying openssl libraries are there, so just copying the CLI works fine.
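If you would rather publish the layer from the CLI than through the console, a sketch along these lines should work (the layer name and zip path here are assumptions, not values from the steps above):
aws lambda publish-layer-version --layer-name openssl --description "openssl CLI binary copied from Amazon Linux 2" --zip-file fileb://openssl.zip --compatible-runtimes nodejs10.x nodejs12.x
The command prints the new layer version ARN, which is what you attach to the function.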
What you can do is create a Lambda layer with the openssl library.
Using the lambci/lambda:build-nodejs10.x Docker image you can compile the openssl library and create a zip file from the install. You can then use the zip file as a layer for your Lambda.
Create a file called create-openssl-zip.sh and make sure to chmod u+x it.
#!/bin/bash -x
# This file should be run inside the lambci/lambda:build-nodejs10.x container
yum update -y
yum install autoconf bison gcc gcc-c++ libcurl-devel libxml2-devel -y
curl -sL http://www.openssl.org/source/openssl-1.1.1d.tar.gz | tar -xvz
cd openssl-1.1.1d
./config --prefix=/var/task/nodejs/openssl --openssldir=/var/task/nodejs/openssl && make && make install
cd /var/task/
rm -rf nodejs/openssl/share
rm -rf nodejs/openssl/include
zip -r lambda-openssl-layer.zip nodejs
cp lambda-openssl-layer.zip /opt/layer/
Then run:
docker run -it -v `pwd`:/opt/layer lambci/lambda:build-nodejs10.x /opt/layer/create-openssl-zip.sh
This will run the script inside the docker container and when it is done you have a file called lambda-openssl-layer.zip in your current directory.
Upload this lambda to an s3 bucket and create a lambda layer.
On your original lambda, add this layer and modify your code so that the PEM library knows where to look for the OpenSSL library as follows:
PEM.config({
  pathOpenSSL: '/opt/nodejs/openssl/bin/openssl'
})
And finally add an extra environment variable to your lambda called LD_LIBRARY_PATH with value /opt/nodejs/openssl/lib
Otherwise it will fail with:
/opt/nodejs/openssl/bin/openssl: error while loading shared libraries: libssl.so.1.1: cannot open shared object file: No such file or directory
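If you prefer to set that variable from the CLI rather than the console, something along these lines should do it (the function name below is a placeholder):
aws lambda update-function-configuration --function-name my-function --environment "Variables={LD_LIBRARY_PATH=/opt/nodejs/openssl/lib}"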
The pem npm docs say:
Setting openssl location
In some systems the openssl executable might not be available by the default name or it is not included in $PATH. In this case you can define the location of the executable yourself as a one time action after you have loaded the pem module:
So I think it is not able to find the OpenSSL path on your system; you can try configuring it programmatically:
var pem = require('pem')
pem.config({
pathOpenSSL: '/usr/local/bin/openssl'
})
As you are using AWS Lambda, try printing process.env.PATH; that will give you an idea of whether openssl is included in the PATH environment variable or not.
You can also check for openssl by running the code below:
const exec = require('child_process').exec;
exec('which openssl', function (err, stdout, stderr) {
  console.log(err ? err : stdout);
});
UPDATE
As @hoangdv mentioned in his answer, openssl seems to have been removed from the nodejs10.x runtime, and I think he is right. Also, we have read-only access to the file system, so we can't do much.
@Seth McClaine, you can give the node-forge npm module a try. One of the modules built on top of it is https://github.com/jfromaniello/selfsigned, which will make your task easier.
https://github.com/lambci/git-lambda-layer/issues/13#issue-444697784 (announcement email)
It seems openssl has been removed from the nodejs10.x runtime.
I have checked again on the lambci/lambda:build-nodejs10.x Docker image and confirmed that. You may need to change your runtime version or find another way to perform createCSR.
which: no openssl in (/var/lang/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/opt/bin)
Does anyone have an example of how to create a dns entry, for a lightsail hosted domain, using the aws cli?
I haven't been able to find an example of the format for the --domain-entry parameter of the create-domain-entry sub-command.
I made use of Mike's syntax to create a TXT record for DMARC. (Thank you Mike!)
I'd been trying to create it in the UI. I kept getting this error: Input error: Target should be enclosed in quotation marks: ""v=DMARC1; p=none; rua="mailto:dmarc@YOURDOMAINNAME.com"".
After trying several times with different recommended quote configurations, I bailed on the UI, and used Mike's syntax in a bash script. In my case, I also removed the extra quotes I had around the email address inside the rua portion. This may have been the source of my errors in the UI.
Here's what successfully created the DMARC record for me:
#!/usr/bin/bash
aws lightsail --region us-east-1 \
create-domain-entry \
--domain-name 'YOURDOMAINNAME.com' \
--domain-entry '{"name":"_dmarc.YOURDOMAINNAME.com","target":"\"v=DMARC1; p=none; rua=mailto:dmarcreports#YOURDOMAINNAME.com\"","isAlias":false,"type":"TXT"}'
Of course, replace YOURDOMAINNAME with your domain name, and the mailto name with the email at which you want to receive DMARC reports.
The command below will create an A record using the CLI
aws lightsail create-domain-entry \
--domain-name mikegcoleman.com \
--region us-east-1 --domain-entry \
name=blog.mikegcoleman.com,target=52.40.235.176,isAlias=false,type=A
Note that you need to specify the region, as all domain actions with the Lightsail CLI need to be performed against us-east-1.
For a TXT record the following should work. I think there is some funkiness with the CLI such that it doesn't like the inline domain entry and needs JSON for the TXT record, so it's formatted differently from above:
aws lightsail --region us-east-1 \
create-domain-entry \
--domain-name 'mikegcoleman.com' \
--domain-entry '{"name":"test.mikegcoleman.com","target":"\"response\"","isAlias":false,"type":"TXT"}'
Yes!
The answer from @binarybelle, creating a bash script and passing the domain entry as JSON, worked for me too for adding a TXT entry for DKIM.
The extra trick with a long DKIM entry is to split the text key into two parts, which means escaping lots of extra double quotes :-)
#!/bin/bash
/usr/local/bin/aws lightsail --region us-east-1 \
create-domain-entry --domain-name 'mydomain.co.uk' \
--domain-entry '{"name":"default._domainkey.mydomain.co.uk","target":"\"v=DKIM1; h=sha256; k=rsa; \" \"p=MIIBIjxxxxxxxxxxxiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAurVgfLc8xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx9cRHBTEOIR4lmIgatpit\" \"t+v7oQzngmfKpBNoTeyxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxQIDAQAB\"","isAlias":false,"type":"TXT"}'
I am trying to load data from BigQuery into a Jupyter Notebook, where I will do some manipulation and plotting. The dataset is 25 million rows with 10 columns, which definitely exceeds my machine's memory capacity (16 GB).
I have read this post about using HDFStore, but the problem here is that I still need to read the data to Jupyter Notebook to do the manipulation.
I am using Google Cloud Platform, so setting up a huge cluster in Dataproc might be an option, though that could be costly.
Has anyone had a similar issue and found a solution?
Concerning products within Google Cloud Platform, you can create a Datalab instance to run your notebooks and specify the desired machine type with the --machine-type flag (docs). You can use a high-memory machine if needed.
Of course, you can also use Dataproc as you already proposed. For easier setup you can use the predefined initialization action with the following parameter upon cluster creation:
--initialization-actions gs://dataproc-initialization-actions/datalab/datalab.sh
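As a rough sketch, the two options could look like this (the instance name, cluster name, and machine types below are examples, not prescribed values):
datalab create my-notebook-instance --machine-type n1-highmem-8
gcloud dataproc clusters create my-cluster --initialization-actions gs://dataproc-initialization-actions/datalab/datalab.sh --master-machine-type n1-highmem-8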
Edit
As you are using a GCE instance, you can also use a script to auto-shutdown the VM when you are not using it. You can edit ~/.bash_logout so that it checks if it's the last session and, if so, stops the VM:
if [ $(who | wc -l) == 1 ]; then
  gcloud compute instances stop $(hostname) --zone $(curl -H Metadata-Flavor:Google http://metadata/computeMetadata/v1/instance/zone 2>/dev/null | cut -d/ -f4) --quiet
fi
Or, if you prefer a curl approach:
curl -X POST -H "Authorization: Bearer $(gcloud auth print-access-token)" https://www.googleapis.com/compute/v1/projects/$(gcloud config get-value project 2>/dev/null)/zones/$(curl -H Metadata-Flavor:Google http://metadata/computeMetadata/v1/instance/zone 2>/dev/null | cut -d/ -f4)/instances/$(hostname)/stop -d ""
Keep in mind that you might need to update Cloud SDK components to get the gcloud command to work. Either use:
gcloud components update
or
sudo apt-get update && sudo apt-get --only-upgrade install kubectl google-cloud-sdk google-cloud-sdk-datastore-emulator google-cloud-sdk-pubsub-emulator google-cloud-sdk-app-engine-go google-cloud-sdk-app-engine-java google-cloud-sdk-app-engine-python google-cloud-sdk-cbt google-cloud-sdk-bigtable-emulator google-cloud-sdk-datalab -y
You can include either of these, along with the ~/.bash_logout edit, in your startup-script.
I am running the following AWS CLI command in Windows PowerShell. It reports that I have not specified a ParameterValue for ParameterKey KeyName, but I have. Why isn't this command working?
PS C:\Users\Manu> aws cloudformation create-stack --stack-name vpn --template-url https://s3.amazonaws.com/awsinaction/chapter5/vpn-cloudformation.json --parameters ParameterKey=KeyName, ParameterValue=mykey ParameterKey=VPC, ParameterValue=$VpcId ParameterKey=Subnet, ParameterValue=$SubnetId ParameterKey=IPSecSharedSecret, ParameterValue=$SharedSecret ParameterKey=VPNUser, ParameterValue=vpn ParameterKey=VPNPassword, ParameterValue=$Password
An error occurred (ValidationError) when calling the CreateStack operation: ParameterValue for ParameterKey KeyName is required
PowerShell has difficulty parsing that comma and thereby loses the ParameterValue after it. Wrap the complete section after --parameters in double quotes so that it can be resolved.
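As a hedged sketch of that idea, quoting each ParameterKey/ParameterValue pair so PowerShell passes the commas and values through literally (the variable names are the ones from the question):
aws cloudformation create-stack --stack-name vpn --template-url https://s3.amazonaws.com/awsinaction/chapter5/vpn-cloudformation.json --parameters "ParameterKey=KeyName,ParameterValue=mykey" "ParameterKey=VPC,ParameterValue=$VpcId" "ParameterKey=Subnet,ParameterValue=$SubnetId" "ParameterKey=IPSecSharedSecret,ParameterValue=$SharedSecret" "ParameterKey=VPNUser,ParameterValue=vpn" "ParameterKey=VPNPassword,ParameterValue=$Password"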
I would still suggest you use AWS Tools for PowerShell, which makes it far easier to deal with all of this.
Hope it helps.