I am trying to run the sample AWS-Lex-Web-UI from https://github.com/awslabs/aws-lex-web-ui#sample-site
As per the directions, I was able to create the Cognito pool id and save it in chatbot-ui-loader-config.json, then tried npm start. The server starts at localhost:8000, but I am not able to run any of the bot commands.
Has anybody already implemented the WEB-UI part using the sample example? I want to export my bot from AWS Lex to a local server.
chatbot-ui-loader-config.json:
{
  "cognito": {
    "poolId": "us-east-1:b3bxxxx-xxxx-45c7-xxxx-9xxxxxxxx"
  },
  "lex": {
    "botName": "DataBot",
    "initialText": "You can ask me for help rendering a file. Just type \"Render File\" or click on the mic and say it.",
    "initialSpeechInstruction": "Say 'Render a file' to get started."
  },
  "polly": {
    "voiceId": "Salli"
  },
  "ui": {
    "parentOrigin": "",
    "toolbarTitle": "File Processor"
  },
  "recorder": {
    "preset": "speech_recognition"
  }
}
Check the browser console for any errors. It helped me while I was trying this out.
Here are some of the things I ran into before I was able to get this working locally:
IAM permissions should be set properly, e.g. the Cognito pool's roles need access to Lex and Polly (a quick permissions check is sketched after this list).
Federated identities versus User Pools - I had to use a Federated Identity pool.
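If you want a quick way to confirm that the identity pool's role can actually reach Lex, a rough sketch like the following helped me (the region, pool id and bot name are placeholders taken from the question, so adjust them to your setup):
var AWS = require("aws-sdk");
AWS.config.region = "us-east-1"; // region of your identity pool and bot
AWS.config.credentials = new AWS.CognitoIdentityCredentials({
  IdentityPoolId: "us-east-1:b3bxxxx-xxxx-45c7-xxxx-9xxxxxxxx" // federated identity pool id
});
var lexruntime = new AWS.LexRuntime();
lexruntime.postText({
  botName: "DataBot",
  botAlias: "$LATEST",
  userId: "permission-test-user",
  inputText: "Render File"
}, function (err, data) {
  // An AccessDeniedException here usually means the pool's unauthenticated role
  // is missing lex:PostText / lex:PostContent permissions.
  if (err) console.error(err);
  else console.log(data.message);
});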
I had the same issue. I followed this guide to solve my problem.
This issue is mostly about setting the proper permissions for the Amazon Cognito pools. It can be checked from the browser console, as pointed out in the answer above.
The guide above provides step-by-step instructions.
I have to implement authorization for access to the Etherpad UI so that it is not a public URL.
When I set "requireAuthentication": true, the browser shows a web-authentication prompt as below.
But in the application, when I access the Etherpad UI through an iframe, it also shows the authentication pop-up as above. Please suggest how I can access the Etherpad UI without the auth pop-up inside the application, but keep the auth pop-up when it is accessed from a web browser instead of the application.
Any other approach is also appreciated.
Just posting here because Google searches for "etherpad basic authentication" led me here.
This solution only applies to etherpad-lite via Docker.
I had been wanting to enable some basic authentication as well without using LDAP or some plugin.
Check out the etherpad-lite Git project:
git clone https://github.com/ether/etherpad-lite.git
Edit settings.json.docker:
- Make one change to the file by setting requireAuthentication to true
- Take note of the two variable names (ADMIN_PASSWORD, USER_PASSWORD)
.
.
"requireAuthentication": true
.
.
"users": {
"admin": {
// 1) "password" can be replaced with "hash" if you install ep_hash_auth
// 2) please note that if password is null, the user will not be created
"password": "${ADMIN_PASSWORD:null}",
"is_admin": true
},
"user": {
// 1) "password" can be replaced with "hash" if you install ep_hash_auth
// 2) please note that if password is null, the user will not be created
"password": "${USER_PASSWORD:null}",
"is_admin": false
}
},
*All the other settings can be left alone, so I left them out of this snippet.
Build a custom image based on the etherpad-lite image:
docker build --tag myetherpad .
Spin up your new etherpad instance and pass in those 2 variables
ADMIN_PASSWORD: "someAdminPassword"
USER_PASSWORD: "someUserPassword"
*I am using docker-compose, so setting those variables will look a little different in vanilla Docker or Kubernetes (a plain docker run equivalent is sketched below).
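For plain Docker, the equivalent is roughly this one-liner (9001 is just the Etherpad default port, so adjust if yours differs):
docker run -e ADMIN_PASSWORD="someAdminPassword" -e USER_PASSWORD="someUserPassword" -p 9001:9001 myetherpad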
**There are definitely better ways to deliver authentication in etherpad-lite, but I just needed a quick instance. This process would be very tedious if you were going to have more than a few users.
This is my first time posting a question, so if I am not explaining things properly please let me know. I am still very new to AWS and trying my best to learn.
MAIN QUESTION: What is the simplest way for me to test that the following setup is working as intended?
I was working with AWS DynamoDB trying to follow this idea:
https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/specifying-conditions.html
Each user's UserId is their partition key, and they will only be able to read, write and delete information in their own rows/items.
I first create a table using the same name, GameScores.
I also create a user pool called "gamers" with all default settings.
I create a policy using the one from the documentation and call it "dynmodbgametable". The only thing I changed was the "Resource", to match the ARN of the DynamoDB "GameScores" table I just created.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowAccessToOnlyItemsMatchingUserID",
      "Effect": "Allow",
      "Action": [
        "dynamodb:GetItem",
        "dynamodb:BatchGetItem",
        "dynamodb:Query",
        "dynamodb:PutItem",
        "dynamodb:UpdateItem",
        "dynamodb:DeleteItem",
        "dynamodb:BatchWriteItem"
      ],
      "Resource": [
        "arn:aws:dynamodb:us-..rest of arn../GameScores"
      ],
      "Condition": {
        "ForAllValues:StringEquals": {
          "dynamodb:LeadingKeys": [
            "${www.amazon.com:user_id}"
          ],
          "dynamodb:Attributes": [
            "UserId",
            "GameTitle",
            "Wins",
            "Losses",
            "TopScore",
            "TopScoreDateTime"
          ]
        },
        "StringEqualsIfExists": {
          "dynamodb:Select": "SPECIFIC_ATTRIBUTES"
        }
      }
    }
  ]
}
I create a role, selecting "Web identity" for the type of trusted entity. For "Choose a web identity provider" I select Amazon Cognito, and for the Identity Pool ID I use the pool id from the "gamers" user pool. I then attach the policy I just created, "dynmodbgametable", and call the role "GameRole".
I go ahead and create two users in the "gamers" user pool.
At this point I don't know what I am supposed to do to test whether I have even followed the instructions properly. I started setting up the Node.js script below to test, and it works for putting and getting items from the database, but I know it is using my default root credentials that are saved on my local machine. I think I am supposed to set AWS.config.credentials to something that includes the user pool and one of the usernames with its associated password, but I haven't had much luck figuring out exactly how to do that (I sketch my best guess below, after the script). Was it necessary to create an app client for the "gamers" user pool as well before this will work?
Here is the little script I was trying if that somehow helps.
var AWS = require("aws-sdk");
AWS.config.update({ region: "us-east-2" });
var ddb = new AWS.DynamoDB({ apiVersion: "2012-08-10" });
var params = {
  TableName: "GameScores",
  Item: {
    UserId: { S: "user id" },
    GameTitle: { S: "hobo" },
  },
};
ddb.putItem(params, function (err, data) {
  if (err) {
    console.log("Error", err);
  } else {
    console.log("Success", data);
  }
});
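This is roughly the direction I think the credential setup needs to go, pieced together from the docs (a sketch, not working code; the pool ids, app client id and user name are placeholders, and I may well be off):
var AWS = require("aws-sdk");
var AmazonCognitoIdentity = require("amazon-cognito-identity-js");

// Placeholders: user pool id, an app client without a client secret,
// identity pool id and a test user created in the "gamers" pool.
var userPool = new AmazonCognitoIdentity.CognitoUserPool({
  UserPoolId: "us-east-2_XXXXXXXXX",
  ClientId: "your-app-client-id"
});
var cognitoUser = new AmazonCognitoIdentity.CognitoUser({ Username: "gamer1", Pool: userPool });
var authDetails = new AmazonCognitoIdentity.AuthenticationDetails({
  Username: "gamer1",
  Password: "TestPassword1!"
});

cognitoUser.authenticateUser(authDetails, {
  onSuccess: function (session) {
    AWS.config.update({ region: "us-east-2" });
    // Exchange the user pool ID token for temporary credentials from the
    // identity pool; DynamoDB calls then run under the "GameRole" policy.
    AWS.config.credentials = new AWS.CognitoIdentityCredentials({
      IdentityPoolId: "us-east-2:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
      Logins: {
        "cognito-idp.us-east-2.amazonaws.com/us-east-2_XXXXXXXXX": session.getIdToken().getJwtToken()
      }
    });
    AWS.config.credentials.refresh(function (err) {
      if (err) return console.log("Credential error", err);
      // Note: with Cognito identities the policy variable is usually
      // ${cognito-identity.amazonaws.com:sub} (the identity id used below),
      // not ${www.amazon.com:user_id} from the documentation example.
      var ddb = new AWS.DynamoDB({ apiVersion: "2012-08-10" });
      ddb.putItem({
        TableName: "GameScores",
        Item: {
          UserId: { S: AWS.config.credentials.identityId },
          GameTitle: { S: "hobo" }
        }
      }, function (err, data) {
        console.log(err || data);
      });
    });
  },
  onFailure: function (err) {
    console.log(err);
  }
});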
Even with that, I don't really know how to obtain "${www.amazon.com:user_id}", or where and how to pass it. Is there some endpoint on the database itself? Am I supposed to create some kind of endpoint to point to? I just know that this is the variable that is supposed to determine the partition key.
If I can figure out how to test that it is working, I feel some of this will click for me. Right now I am not quite understanding what is going on conceptually. All the YouTube videos, documents and other Stack Overflow posts I have found only seem to talk about this at a higher level, or are not within the scope of what I am trying to do.
Thanks for any help that can be provided! I will be sure to edit this if something is missing.
EXTRA INFO, PROBABLY NOT NEEDED: I currently have an AWS Amplify web application with a working interface and working authentication against a user pool. I would like to add this kind of fine-grained access control so that when a user logs in, they can edit their own profile information (name, age, etc.) but cannot view other profiles' information. If I can get a working prototype of this fine-grained access control, I should be able to figure out how to get it working for my Amplify application.
For anyone that happens to stumble onto my post, I ended up going a slightly different route. It may not be useful for you but it is what solved my problem.
Because I was using AWS Amplify, I reached out to their Discord (shout out to undef_obj for answering me!), and he said the following:
looking at your link, you're attempting to leverage the IAM policy variables for Cognito Identity and craft your own access control matrix solution. While this is possible, it's going to be a lot of effort and testing with potential for security issues if something is implemented wrong. Assistance with that is outside the scope of the Amplify framework. However, if you're looking for fine grained authorization with Amplify this is built into the GraphQL Transformer #auth directive and I'd recommend looking at that. There are plenty of examples showing how to setup a React app to an Amplify GraphQL endpoint which uses AWS AppSync and DynamoDB as the backing store.
So I looked into this and found that using AWS AppSync worked for me!
I went to THIS LINK and followed some of the instructions there. Specifically:
amplify add api
selected: GraphQL
authorization type: Amazon Cognito User Pool
(I already had a user pool added to the project, so it skipped the process of making a new user pool)
I kept choosing the defaults until "Choose a schema template"
I picked "Objects with fine-grained access control (e.g., a project management app with owner-based authorization)"
From there it set up a sample project I could start learning GraphQL from, including how to implement the fine-grained access control. Using the code from the getPrivateNote resolver was probably the most useful thing. I also used this AppSync starter application to figure out how to interact with GraphQL from my React client. This whole process took me HOURS AND HOURS to figure out, and I am still trying to fully understand how it all works, but so far this AppSync GraphQL approach seems to be the best for my scenario. The built-in query system that AppSync has made it easier to test access control (i.e. log in with one user and see if I only had access to my own items).
Here is what my reactjs code ended up looking like for the client side:
import { API, graphqlOperation } from 'aws-amplify';
import QueryUserInfo from './graphql/QueryUserInfo';
...
getRequest = (evt) => {
  return new Promise((resolve, reject) => {
    API.graphql(graphqlOperation(QueryUserInfo))
      .then((data) => {
        if (data) {
          console.log(data);
          resolve(data);
        } else {
          console.log(data);
          resolve(null);
        }
      })
      .catch((err) => {
        console.log(err);
        resolve(null);
      });
  });
}
This is what the actual QueryUserInfo.js file looked like:
import gql from "graphql-tag";
export default gql(`
query QueryName {
getUser(id: "c35...rest of cognito user id...69") {
id
email
name
}
}`);
The resolver code is too long to post, but I just used the template code from Amplify and I think I only had to change #set( $allowedOwners0 = $util.defaultIfNull($ctx.result.owner, []) )
to #set( $allowedOwners0 = $util.defaultIfNull($ctx.result.id, []) )
since "id" was what I was using on my dynamoDB table, not "owner". Good luck to anyone reading this!
I have a web application and I want to track its crashing reports.
Can I use Firebase Crashlytics or Fabric for this purpose? Their site mentions that it is only for Android or iOS.
Regards,
Makrand
There is a feature request: https://github.com/firebase/firebase-js-sdk/issues/710
It looks like it's not supported at all; Fabric didn't support Crashlytics on the web either. There are alternatives like https://www.bugsnag.com, but I would like to have everything in one place. I don't see any difference between web, Android or iOS clients, and I don't know why this is not supported.
A possible solution for the Vue framework is to catch errors and send them to Google Analytics, where you can also connect your Firebase mobile apps. I plan to try it this way for now. I haven't tested it yet, so I don't know whether I also have to catch window errors.
Vue.config.errorHandler = function (error) {
  //Toast.error(error.message)
  console.warn(error.message)
  // send the error as an event to Google Analytics
  var message = error.message;
  if (error.stack) message = error.stack;
  ga('send', 'event', 'Vue.config.errorHandler', message, navigator.userAgent);
}
window.onerror = function (message, source, lineno, colno, error) {
  // also catch errors that never reach the Vue handler and send them to GA
  ga('send', 'event', 'window.onerror', message, navigator.userAgent);
}
I also found something like this for TypeScript: https://github.com/enkot/catch-decorator
While there is still no Firebase Crashlytics for web, Google offers Stackdriver with error-reporting functionality. It keeps track of all errors, with the ability to mark them as resolved (it can also send email notifications about new errors):
You can access it using the below url (make sure to put your firebase {project_id} in the link before clicking it):
https://console.cloud.google.com/errors?project={project_id}
There are two ways to use it:
The easy way, with limited flexibility:
Every console.error(new Error(...)) reported from your Firebase function is automatically tracked in the Stackdriver error-logging platform.
So you just need to send an error report from your web app to your Firebase function and log it with console.error inside that function.
Note: only instances of the Error object are sent to the Stackdriver platform. For example, console.error("{field1: 'text'}") won't be sent to Stackdriver. More info on that in this doc.
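A minimal sketch of such a function (the function name and request shape are just assumptions, adapt them to your app):
const functions = require('firebase-functions');

exports.reportWebError = functions.https.onRequest((req, res) => {
  // The web app POSTs its error text here; wrapping it in an Error instance
  // is what makes it show up in the Stackdriver error reporting console.
  const text = (req.body && req.body.message) || 'Unknown web error';
  console.error(new Error(text));
  res.status(200).send('ok');
});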
The more comprehensive way, which provides additional control (you can also report the userId, your custom platform name, its version, the user agent, etc.):
Here is a quick snippet on how it can be used (in our case we first send the error log from web app to our server and then report the error to Stackdriver):
In Firebase Node.js:
const { ErrorReporting } = require('@google-cloud/error-reporting');
let serviceAccount = {...} // the service account is your firebase credentials that hold your secret keys etc. See below for more details.
let config = {
  projectId: serviceAccount.project_id,
  reportMode: "always",
  credentials: serviceAccount
}
let errors = new ErrorReporting(config);
Report an error to Stackdriver from Node.js:
async function reportError(message, userId) {
  // message is a string that contains the error name with an optional
  // stacktrace as a string representing each stack frame separated using "\n".
  // For example:
  // message = "Error: Oh-hoh\n at MyClass.myMethod (filename.js:12:23)\n etc.etc."
  const errorEvent = errors.event()
    .setMessage(message)
    .setUser(userId)
    .setServiceContext("web-app", "1.0.0");
  await errors.report(errorEvent);
}
More info about the Stackdriver library is available in this doc, and more info about the stacktrace and its format can be found in the docs here.
A few notes on setting it up:
You need to enable two things:
Enable the Stackdriver API for your project using the link below (make sure to set your firebase {project_id} in the url before clicking it):
https://console.developers.google.com/apis/library/clouderrorreporting.googleapis.com?project={project_id}
Make sure to also grant the "Error writer" permission to the Firebase service account so Stackdriver can receive the error logs (a service account is, roughly, the "user" that represents your Firebase project when it accesses other services).
To grant the permission, follow the steps below:
First locate the "Firebase service account" using your Firebase dashboard link (you can find it below) and remember its value. It looks something like firebase-adminsdk-{random_symbols}@{project_id}.iam.gserviceaccount.com
Then open gcloud console under "Access"->"IAM". Or use the following link:
https://console.cloud.google.com/access/iam?project={project_id} <- put your firebase project id here
Locate your Firebase service account from step 1.
Press edit for that account and add the "Errors writer" permission.
Where to find the serviceAccount.json:
Regarding the serviceAccount: these are universal credentials that can be used to authenticate many Google services, including Stackdriver. You can obtain yours from your Firebase dashboard using the url below (just put your firebase project_id in the link before using it):
https://console.firebase.google.com/u/0/project/{project_id}/settings/serviceaccounts/adminsdk
Open it and click "generate new credentials". This will generate a new service account and download the serviceAccount.json that you need to keep safe (you won't be able to get it again unless you generate a new one).
Apparently Sentry now supports several web frameworks out of the box.
I have recently integrated Sentry crash reporting for a Django app.
see here:
https://sentry.io/platforms/
I'm using the googleapis npm package ("apis/drive/v3.js") for the Google Drive service. On the backend I'm using Node.js, with ngrok for local testing. My problem is that I can't get notifications.
The following code:
drive.changes.watch({
  pageToken: startPageToken,
  resource: {
    id: uuid.v1(),
    type: 'web_hook',
    address: 'https://7def94f6.ngrok.io/notifications'
  }
}, function (err, result) {
  console.log(result)
});
returns something like:
{
  kind: 'api#channel',
  id: '8c9d74f0-fe7b-11e5-a764-fd0d7465593e',
  resourceId: '9amJTbMCYabCkFvn8ssPrtzWvAM',
  resourceUri: 'https://www.googleapis.com/drive/v3/changes?includeRemoved=true&pageSize=100&pageToken=6051&restrictToMyDrive=false&spaces=drive&alt=json',
  expiration: '1460227829000'
}
When I change any files in Google Drive, no notifications come. Dear colleagues, what is wrong?
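For reference, the receiving side is just a plain HTTP endpoint; a rough sketch of what the /notifications handler looks like (Express assumed, not my exact code):
const express = require('express');
const app = express();

app.post('/notifications', (req, res) => {
  // Drive delivers push notifications as POSTs with an empty body;
  // the useful details are in the X-Goog-* headers.
  console.log('Channel id:', req.headers['x-goog-channel-id']);
  console.log('Resource state:', req.headers['x-goog-resource-state']); // "sync" on setup, then "change"
  res.sendStatus(200); // acknowledge quickly so Google does not keep retrying
});

app.listen(3000); // whatever local port ngrok is tunnelling to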
This should be a comment, but I do not have enough reputation (50 points) to post one. Sorry if this is not a real answer, but it might help.
I learned this today. I'm doing practically the same thing as you, only with the Gmail API rather than Drive.
I see you have this error:
"push.webhookUrlUnauthorized", "message": "Unauthorized WebHook etc..."
I think this is because of one of these 2 reasons:
You didn't give the Drive API publisher permissions to your topic.
If you want to receive notifications, the authorized WebHooks URL must be set both on the server (your project) and in your pub/sub service (Google Cloud).
See below - for me this setup works:
1. Create a topic
2. Give the Drive publish permissions to your topic. This is done by adding the Drive scope in the box and following steps 2 and 3.
3. Configure authorized WebHooks. From the Create Topic page, click on "Add subscriptions". It's not really visible here, but once you are there you can manage it.
I'm writing a simple SNS client that is meant to subscribe itself to an SNS topic and then listen for notifications. I can successfully submit an sns.subscribe request, but when I pick up the SubscriptionConfirmation POST message from AWS and try to respond using sns.confirmSubscription, I get an AuthorizationError:
[AuthorizationError: User: arn:aws:iam::xxx:user/mv-user is not authorized to perform: SNS:ConfirmSubscription on resource: arn:aws:sns:us-east-1:xxx:*]
If I use exactly the same Token and TopicArn in a GET query to the server the subscription confirmation works fine, with no authentication.
Any ideas why it's not working? My SNS topic is wide open with publish/subscribe permissions set to 'Everyone'.
For reference, my code is something like this:
var params = {
  TopicArn: topicArn, // e.g. arn:aws:sns:us-east-1:xxx:yyy
  Token: token // long token extracted from POST body
};
sns.confirmSubscription(params, function (err, data) {
  if (err) {
    // BOOOM - keep getting here with AuthorizationError
  } else {
    // Yay. Worked, but never seem to get here :(
  }
});
However, if I navigate to the URL similar to this in a browser (i.e. completely unauthenticated), it works perfectly:
http://sns.us-east-1.amazonaws.com/?Action=ConfirmSubscription&Token=<token>&TopicArn=arn%3Aaws%3Asns%3Aus-east-1%3Axxx%3Ayyy&Version=2010-03-31
The only differences seem to be the inclusion of 'Authorization' and 'Signature' headers in the programmatic version (checked using Wireshark).
Any ideas? Thanks in advance!
Update
In my code, if I just programmatically do a simple GET request to the SubscribeURL in the SubscriptionConfirmation message, this works fine (a rough sketch of the workaround is below). It just seems odd that the confirmSubscription API call doesn't work. I will probably stick to this workaround for now.
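Roughly, the workaround looks like this (a sketch; body is assumed to be the parsed JSON of the SubscriptionConfirmation POST):
var https = require('https');

// body is the parsed JSON of the SubscriptionConfirmation POST from SNS
https.get(body.SubscribeURL, function (res) {
  console.log('Subscription confirmed, HTTP status:', res.statusCode);
}).on('error', function (err) {
  console.log('Confirmation request failed:', err);
});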
Update 2
I also get the same error when calling sns.unsubscribe although, again, calling the UnsubscribeURL in each notification works. It seems other people have run into that issue too, but I can't find any solutions.
I faced a similar issue while developing my application.
The way I ended up solving it is the following:
go to IAM and click on your user
go to the permissions tab and click on "Attach Policy"
use the filter to filter for "AmazonSNSFullAccess"
Attach the above policy to your user.
The above should take care of it.
If you want to be fancy, you can create a custom policy based on "AmazonSNSFullAccess" and apply it to your user instead.
The custom policy would be something similar to the following:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "sns:ConfirmSubscription"
      ],
      "Effect": "Allow",
      "Resource": "YOUR_RESOURCE_ARN_SHOULD_BE_HERE"
    }
  ]
}
The error says it all:
[AuthorizationError: User: arn:aws:iam::xxx:user/mv-user is not authorized to perform: SNS:ConfirmSubscription on resource: arn:aws:sns:us-east-1:xxx:*]
is basically telling you that the IAM user you're using to call ConfirmSubscription doesn't have the proper permissions to do so. Best bet is to update the permissions for that IAM user, specifically adding ConfirmSubscription permissions.
(Based on your comments, even though the documentation says otherwise, the error is pretty specific... might be worth following up directly with AWS about this issue, since either the error message or documentation is incorrect).