Provision users (with passwords) - JHipster

In JHipster, I want to provision/import a huge number of users and generate passwords for them, which I would then communicate to them via mobile phone/WhatsApp.
I've tried using org.springframework.security.crypto.bcrypt.BCryptPasswordEncoder for this, but when I test it by calling my service, I get a 401 every time, even though that worked before...
Is there a better way to import users? I only have an Excel/Google Docs sheet to import from. This is my first time using JHipster; I've used Spring Boot for several years up to now, and I'm thinking that I might have to move "back", but I'm reluctant...
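A minimal sketch of the provisioning step, assuming Spring Security's BCryptPasswordEncoder defaults on the JHipster side (bcrypt, strength 10); the file names and CSV columns here are illustrative, not JHipster fixtures:

# Generate a random password per user plus a bcrypt hash that
# BCryptPasswordEncoder (default strength 10) can verify.
# users.csv / hashes.csv and the 'login' column are assumptions.
import csv
import secrets

import bcrypt  # pip install bcrypt

with open("users.csv", newline="") as src, \
        open("hashes.csv", "w", newline="") as out:
    writer = csv.writer(out)
    writer.writerow(["login", "password", "password_hash"])
    for row in csv.DictReader(src):
        password = secrets.token_urlsafe(9)
        # prefix=b"2a" yields the $2a$ variant that BCryptPasswordEncoder emits
        pw_hash = bcrypt.hashpw(password.encode(),
                                bcrypt.gensalt(rounds=10, prefix=b"2a")).decode()
        writer.writerow([row["login"], password, pw_hash])

From there, loading the hashes into JHipster's jhi_user table (its password_hash column is what gets checked at login) via a one-off SQL script or a Liquibase changelog should work; keep the plain passwords only long enough to send them out. Note that a 401 despite a correct hash often just means the account still has activated = false.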

Related

Google Cloud Speech-to-Text non-intrusive end-user authentication

I am working on a desktop application that uses the GCP Speech-to-Text API to perform streaming recognition, with Python 3 and the Google client libraries (google.cloud.speech). I started from the transcribe streaming infinite Python sample and built around those concepts, and everything works nicely. I'm now getting to the question of end-user access, as I'm currently using a service account file for development purposes, which I'm obviously not willing to distribute to all users.
Basically, how do I give my end users access to the Speech-to-Text service for streaming recognition in the least intrusive way possible? I have no need to access storage or the like, as I don't touch bucket contents and stream all audio directly. I actually don't need any user information; I only need GCP to process the STT requests and send me the responses.
I see two solutions that should work on paper, out of the three the documentation mentions (leaving out the service account file ones):
API key
OAuth2
API key
API keys sound like my dream option: simple, no user interaction past the initial setup, I can manage such keys in GCP's console, and they should grant access to what I need (as I effectively don't need any user info, an account is mostly irrelevant).
However, how to use an API key with the Google Speech client library totally eludes me. I can see a Pub/Sub Go example, but I can't find any mapping to Python. I'm not even 100% sure it can work: the Go documentation for the option seems to note that it only works for JSON-over-HTTP, and I believe the client library for Speech-to-Text uses gRPC.
Yet, at least with a JSON non-streaming recognize request, I can use such an API key, and I successfully did so manually with cURL on the command line. So I still have a little hope, in case the gRPC restriction either isn't true or doesn't apply to my use case.
This part of the question boils down to: "how do I specify an API key using the Python SpeechClient?".
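For what it's worth, newer releases of google-api-core expose an api_key field on ClientOptions that the generated clients accept; whether your installed versions honor it on the gRPC streaming path is exactly the open question here, so treat this as an assumption to verify rather than a confirmed answer:

# Assumes a google-api-core recent enough to support ClientOptions(api_key=...);
# the key string is a placeholder.
from google.api_core.client_options import ClientOptions
from google.cloud import speech

client = speech.SpeechClient(
    client_options=ClientOptions(api_key="YOUR_API_KEY")
)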
OAuth2
This sounds like my second-best option, as it needlessly asks the user for authentication while I don't actually need any personal data. Yet I still have serious issues I'm struggling to overcome:
How do I reliably store information so that I don't have to re-ask the user for authorization every single time the app runs? google_auth_oauthlib's InstalledAppFlow doesn't seem to provide such a feature, so I'm rolling my own based on google.oauth2.credentials.Credentials.from_authorized_user_file(), after saving the credentials with to_json() the first time I obtain them from InstalledAppFlow. I'm not confident this will last, though, and I'm really not sure how to check whether the credentials are still good before an API call fails with them (e.g. I can't seem to be able to rely on Credentials.valid before they actually get used).
There seem to be no Speech-to-Text-specific scopes, and the required one is way broader than what I need, leading to an overly complex and frightening authorization request. And no, without this scope I cannot access the Speech-to-Text API; I tried :)
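On the persistence point, the usual google-auth pattern is to save the refresh token once, refresh silently at startup, and fall back to the browser flow only when the refresh fails; a sketch with made-up file paths:

# Refresh-token persistence sketch; token.json and client_secret.json are placeholders.
import os

from google.auth.transport.requests import Request
from google.oauth2.credentials import Credentials
from google_auth_oauthlib.flow import InstalledAppFlow

SCOPES = ["https://www.googleapis.com/auth/cloud-platform"]  # the broad scope lamented above
TOKEN_FILE = "token.json"

def load_credentials():
    creds = None
    if os.path.exists(TOKEN_FILE):
        creds = Credentials.from_authorized_user_file(TOKEN_FILE, SCOPES)
    if creds and creds.expired and creds.refresh_token:
        try:
            creds.refresh(Request())  # refresh proactively instead of trusting creds.valid
        except Exception:
            creds = None  # refresh token revoked or expired: re-run the flow
    if not creds or not creds.valid:
        flow = InstalledAppFlow.from_client_secrets_file("client_secret.json", SCOPES)
        creds = flow.run_local_server(port=0)
    with open(TOKEN_FILE, "w") as fh:
        fh.write(creds.to_json())
    return creds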
Summary
To summarize: what is my best and least-intrusive option to provide credentials to be able to use GCP Speech-to-Text in my desktop application?

Extend a D-Bus service

The bigger goal:
Writing a batch user manager targeted at school classroom environments.
The problem
I want to write a user manager that uses a GUI to add, manage and delete users for classroom environments. The program I'm working on is ltsp-manager.
Up until now, all the user management is done by executing bash commands from a Python script, meaning the whole GUI has to run as root and everything is handcrafted.
The goal
Create a D-Bus service that handles all the account management, and let the GUI run as a regular user, requiring a password from time to time.
I looked around and found that org.freedesktop.Accounts already provides a service with a lot of the functionality I want. However, it also lacks some; management of groups is missing entirely.
What is a good way to use the org.freedesktop.Accounts functionality and add some additional functions/methods?
Thoughts so far
Things that came to my mind include:
just redo everything - meaning a lot of duplicated work.
copy the interfaces and write functions that call the original ones
write a service that only implements the additional functions without touching the original ones. The client will then use the original service and the newly written one.
All my testing experiments are done with python3 and pydbus, which seems to be the best choice among many.
I have never written a real-world D-Bus service, though the experiments do show some results in d-feet. This is not really a "what do I need to type" kind of question but rather a best-practice question.
The best long-term answer would be to fix accountsservice upstream to implement groups support. There’s already work towards that; it just needs someone to pick it up and finish it off. accountsservice is the project which provides the canonical implementation of org.freedesktop.Accounts.
The other approaches are bad because:
just redo everything - meaning a lot of duplicated work.
As you say, this is a lot of duplicated work, and then you have to maintain it all.
copy the interfaces and write functions that call the original ones
That means you have to forever keep up with changes and additions to accountsservice.
write a service that only implements the additional functions without touching the original ones. The client will then use the original service and the newly written one.
That doesn’t come with any additional maintenance problems, but it means your service won’t integrate well with accountsservice. There might be race conditions between updates on your D-Bus objects and updates on the accountsservice objects, for example. You also won’t be able to share the maintenance burden of the groups code with the (many) other users of accountsservice.
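For completeness, if you do go the separate-service route, the skeleton is small with pydbus; this sketch publishes a groups-only service alongside org.freedesktop.Accounts (the bus and interface names are made up, not an existing standard, and owning a name on the system bus additionally needs a D-Bus policy file):

# Sketch of the third option: a groups-only service next to accountsservice.
from gi.repository import GLib
from pydbus import SystemBus

class GroupManager:
    """
    <node>
      <interface name='org.example.GroupManager1'>
        <method name='CreateGroup'>
          <arg type='s' name='name' direction='in'/>
        </method>
        <method name='AddUserToGroup'>
          <arg type='s' name='user' direction='in'/>
          <arg type='s' name='group' direction='in'/>
        </method>
      </interface>
    </node>
    """

    def CreateGroup(self, name):
        # real code would call groupadd/libuser here, gated by a polkit check
        print("would create group", name)

    def AddUserToGroup(self, user, group):
        print("would add", user, "to", group)

bus = SystemBus()
bus.publish("org.example.GroupManager1", GroupManager())
GLib.MainLoop().run()

pydbus takes the introspection XML from the class docstring, which is what d-feet will show; gating each method behind polkit is the same pattern accountsservice itself uses for authorization.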

How do I test RESTful APIs with constantly changing, random, user-defined data

I'm developing a clean-up API (github.com/Shadowys/btapi) for a MediaWiki application, Baka-Tsuki, to pull meaningful data from the novel project pages, such as the author, volume lists, and cover images. The pages are user-defined and formatted in various ways decided by the translator (user). The pages are also updated and created daily, with new formats appearing sporadically. However, the API parser is able to handle most, if not all, of the current pages, whatever their format.
Baka-Tsuki is not going to change into a database-backed application in the near future, since the wiki is currently the most user-friendly and cost-effective way to share translations, and we don't have enough developers to constantly work on a new application.
I'm looking into using Mocha to automate testing of the API, but as the input data constantly changes, testing is nearly impossible without checking every page available. I've looked at Twitter's and Facebook's testing methods, but they deal with consistently formatted user input.
In this case, which testing method should I refer to? Should I run tests based simply on the types returned and on the availability of the returned values, or do I have to make a copy of a test page to simulate testing?
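One way to make the types-and-presence idea concrete, sketched in Python for consistency with the other snippets on this page (the same assertions translate directly to Mocha/Chai); the endpoint and field names are made up:

# Shape-based test: assert types and presence, not exact values,
# since the underlying wiki pages change daily.
import requests

def test_novel_endpoint_shape():
    resp = requests.get("https://example.org/btapi/novel/Some_Novel")  # hypothetical endpoint
    assert resp.status_code == 200
    data = resp.json()
    assert isinstance(data.get("author"), str) and data["author"]
    assert isinstance(data.get("volumes"), list)
    for volume in data["volumes"]:
        assert isinstance(volume.get("title"), str)
    if data.get("cover") is not None:
        assert data["cover"].startswith("http")

Running this against a rotating sample of live pages catches parser regressions without pinning exact values; a few saved copies of known-tricky pages can then serve as fixed regression fixtures.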

Storing quick analytics using Redis and Node.js

I am new to Redis and would like to store the web analytics of a site, both globally and per user activity.
Below is what I am stuck with:
// to get all unique IPs
client.sadd('visitors', ip);
// to record hits per IP
client.hincrby('hits', ip, 1);
The above works fine so far: I do get the number of different IPs and a hit counter per IP.
The problem comes when storing the activities made by each IP, i.e. storing the links they clicked and the searches they made, with a datetime.
Can someone please throw light on how best to manage this?
Thanks
the problem comes when storing the activities made by each IP
You will need a separate structure for storing these.
The simplest rational structure is a "list of actions by session". Take a look at the sorted set commands, which provide a basic framework for creating such a list of actions within a session.
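A minimal sketch of that idea, shown with Python's redis-py for consistency with the other snippets on this page (ZADD/ZRANGEBYSCORE map one-to-one onto the Node client); the key layout is made up:

# One sorted set per IP, scored by unix timestamp, so time-range
# queries come for free. "activity:<ip>" is an invented key scheme.
import json
import time

import redis

r = redis.Redis()

def log_action(ip, action, target):
    entry = json.dumps({"action": action, "target": target, "ts": time.time()})
    r.zadd(f"activity:{ip}", {entry: time.time()})

def actions_between(ip, start_ts, end_ts):
    return [json.loads(e) for e in r.zrangebyscore(f"activity:{ip}", start_ts, end_ts)]

log_action("203.0.113.7", "click", "/some/link")
log_action("203.0.113.7", "search", "redis analytics")
print(actions_between("203.0.113.7", 0, time.time()))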
This will get you something quickly. However, it is probably not what you really want; in fact, Redis is probably not useful for this at all.
If you want to retrace an entire site visit, you really want to connect to some sort of true analytics framework. There are dozens of website tracking tools that provide this type of functionality, so it's not really clear that building one yourself is very efficient.

Automate the export of Facebook Insights data

I'm looking for a way to programmatically export Facebook Insights data for my pages, in a way that I can automate. Specifically, I'd like to create a scheduled task that runs daily and saves a CSV or Excel file of a page's Insights data using a Facebook API. I would then have an ETL job that puts that data into a database.
I checked out the OData service for Excel, which appears to be broken. Does anyone know of a way to programmatically automate the export of Insights data for Facebook pages?
It's possible and not too complicated once you know how to access the insights.
Here is how I proceed:
Log the user in with the offline_access and read_insights permissions.
read_insights allows me to access the insights for all the pages and applications the user is an admin of.
offline_access gives me a permanent token that I can use to update the insights without having to wait for the user to log in.
Retrieve the list of pages and applications the user is an admin of, and store those in the database.
When I want to get the insights for a page or application, I don't query FQL; I query the Graph API. First I calculate how many queries to graph.facebook.com/[object_id]/insights are necessary for the chosen date range. Then I generate a query to use with the Batch API (http://developers.facebook.com/docs/reference/api/batch/). That allows me to get all the data for all the available insights, for all the days in the date range, in only one request.
I parse the rather huge JSON object obtained (it weighs a few MB, be aware of that) and store everything in the database.
Now that you have all the insights parsed and stored in the database, you're just a few SQL queries away from manipulating the data the way you want: displaying charts, or exporting in CSV or Excel format.
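A bare-bones version of that batch step might look like the following; the token, page ID, and date windows are placeholders, and the Batch API caps each call at 50 requests:

# Sketch of the batch step: one POST fetching insights for several date windows.
import json

import requests

ACCESS_TOKEN = "..."   # the stored permanent token
PAGE_ID = "1234567890"
windows = [(1351728000, 1354320000), (1354320000, 1356912000)]  # unix timestamps

batch = [
    {"method": "GET",
     "relative_url": f"{PAGE_ID}/insights?since={since}&until={until}"}
    for since, until in windows
]
resp = requests.post(
    "https://graph.facebook.com",
    data={"access_token": ACCESS_TOKEN, "batch": json.dumps(batch)},
)
for item in resp.json():          # one entry per batched request
    body = json.loads(item["body"])
    # ...store body["data"] in the database here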
I have the code already written (and published as a temporarily free tool on www.social-insights.net), so exporting to Excel would be quite fast and easy.
Let me know if I can help you with that.
It can be done before the weekend.
You would need to write something that uses the Insights part of the Facebook Graph API. I haven't seen anything already written for this.
Check out http://megalytic.com. This is a service that exports FB Insights (along with Google Analytics, Twitter, and some others) to Excel.
A new tool is available: the Analytics Edge add-ins now have a Facebook connector that makes downloads a snap.
http://www.analyticsedge.com/facebook-connector/
There are a number of ways you could do this. I would suggest your choice depends on two factors:
What is your level of coding skill?
How much data are you looking to move?
I can't answer the first for you, but in your case you aren't moving that much data (in relative terms). I will still share three options out of many.
HARD CODE IT
This would require a script that accesses Facebook's Graph API, and a computer/server to run that request automatically. I primarily use AWS and would suggest launching an EC2 instance scheduled to run your script at set times. I haven't used AWS Data Pipeline, but I do know it is designed so it can run a script automatically as well... supposedly with a little less server know-how.
USE THIRD PARTY ADD-ON
There are a lot of people who have similar data needs, which has led to a number of easy-to-use tools. I use Supermetrics Free to run occasional audits and make sure that our tools are running properly. Supermetrics is fast and has a really easy interface for accessing Facebook's APIs and several others. I believe you can also schedule refreshes and updates with it.
USE THIRD PARTY FULL-SERVICE ETL
There are also several services and freelancers that can set this up for you with little to no work on your part, depending on where you want the data. Stitch is a service I have worked with for FB ads. There might be better services, but it has fulfilled our needs so far.
MY SUGGESTION
You would probably be best served by using a third-party add-on like Supermetrics. It's fast and easy to use. The other methods are more worth looking into if you have a lot more data to move, or need it refreshed more often than daily.
