Last year, I developed a simple Angular-Express-SQLite application for a local warehouse (logistics hub). It was used to keep track of daily incoming and outgoing trucks, with information like their weight, origin, etc., and I deployed it on an offline desktop.
Everything went well until I learned that the warehouse's computer operator had resold the app to other warehouses for a decent amount.
Now the owner of the first warehouse has contacted me again about some changes to the app, including crucial new inputs required under new government guidelines, and I'm looking for a way to prevent the app from being stolen again.
I'm looking for a solution that is as lightweight as possible, because the desktops the app runs on are very cheap (the lowest possible configuration); hence the choice of SQLite over other database providers.
The app should be deployed in such a manner that it can't simply be copy-pasted from one machine to another. Even simple, not-so-secure methods would do, as the operators aren't tech-savvy and know only the bare minimum about computers, like copy-paste.
So I came up with a low-level workaround, since I'll be the one setting up the application on the client machine.
While initialising the application, an encrypted entry containing the computer's name could be stored in the database, read from an environment variable via Express/Node:
process.env.COMPUTERNAME
Or I could use some other system variable unique to that particular computer, and then compare the stored value with the current value on every login.
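A minimal sketch of that idea (assumptions of mine, not prescribed anywhere: the sqlite3 npm package, a machine_lock table created at install time, and a secret baked into the build):

// machine-lock.js - a minimal sketch, not a hardened solution.
const crypto = require('crypto');
const sqlite3 = require('sqlite3');

// COMPUTERNAME is Windows-only; pick another unique variable elsewhere.
const SECRET = 'replace-with-your-own-secret';
const fingerprint = crypto
  .createHmac('sha256', SECRET)
  .update(process.env.COMPUTERNAME || '')
  .digest('hex');

const db = new sqlite3.Database('app.db');
db.get('SELECT value FROM machine_lock LIMIT 1', (err, row) => {
  if (err || !row) return; // first run: insert the fingerprint here instead
  if (row.value !== fingerprint) {
    console.error('This copy is not licensed for this machine.');
    process.exit(1);
  }
});

An HMAC rather than a plain hash means the operator can't simply recompute the value on a new machine without also knowing the secret inside the build.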
As Joachim already said in his comment, if the app has internet access, you can check for a license key or something similar on a server.
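A rough sketch of such an online check (the URL, the query parameter, and the "200 means valid" convention are all assumptions, not an existing service):

// license-check.js - a rough sketch of a startup license check.
const https = require('https');

function verifyLicense(key, cb) {
  https.get('https://example.com/verify?key=' + encodeURIComponent(key), (res) => {
    cb(res.statusCode === 200); // assume 200 means the key is valid
  }).on('error', () => cb(false)); // decide how to treat network failure
}

verifyLicense('MY-LICENSE-KEY', (ok) => {
  if (!ok) process.exit(1); // refuse to start without a valid key
});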
Perhaps something like pkg can help, which compiles your sources into a single executable file: pkg on npmjs.
I'm working on a blockchain project where I'm implementing a wallet using Django. A user logs in and gets to generate an address. Now, I want to store the user's private/public key pair in a file locally on the user's machine every time the user generates an address, and to be able to read that file back in the next session (at the user's will). I'm doing this because the app itself is a supernode of the blockchain and all users are virtual nodes. All communication between users happens through the supernode, so the wallet functionality isn't the core function of the app.
Everything is working perfectly except I can't find a way to create files locally on the client's machine. I've perused the Django documentation but I can't seem to find anything useful (maybe I'm not looking in the right place).
Is there a way I can achieve that?
Note: I'm trying as much as possible to avoid JavaScript, and I don't want users to download/upload files manually.
Saving data/files on the client's machine is restricted by modern browsers for good (security) reasons.
However, there are cookies (https://www.w3schools.com/js/js_cookies.asp, https://docs.djangoproject.com/en/3.2/topics/http/sessions/) and HTML5 Web Storage (https://www.w3schools.com/html/html5_webstorage.asp).
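For illustration, Web Storage keeps a small piece of data in the browser between sessions (this does need a few lines of JavaScript, which you said you'd rather avoid; the key name and value here are made up):

// Runs in the browser: persists a small value across sessions.
// localStorage is readable by any script on the page, so think twice
// before putting a private key in it - this only illustrates the API.
localStorage.setItem('walletAddress', '0xabc123'); // illustrative value
const saved = localStorage.getItem('walletAddress'); // null if never set
console.log(saved);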
First of all, I'm not really sure whether this question belongs here on Stack Overflow or whether I should ask it somewhere else. If that's the case, please point me in the right direction :)
So, for context, this is an app that I was asked to develop for my job. At first I thought of building a web app and hosting it on the company's servers and domain (intranet), but that isn't possible due to external issues beyond my control.
Is there another way to achieve this? The app must have a database and should be accessible to a bunch of users at the same time.
Of course we want to spend the least amount of money possible to make this happen. Also, using a workstation of our own to host everything is not possible either.
Edit: I haven't finished development yet, but for now I'm building it in Python with Flask.
The number of users is really small, just up to five people.
OK - I suspect a lot of what you'll get in response to this is that your description is too vague. Things such as scale, number of users, and the programming languages used to create the web app are important when talking about hosting.
However, for me, there are three very good options out there for free hosting, up to a certain amount of traffic.
1.) Heroku - heroku.com
A world known web hosting platform. You can publish code through GitHub, and it has some extensive coverage for different types of web apps. Definitely worth a look.
2.) Netlify - netlify.com
Similar to Heroku, but used by some major companies. Allows you to host for free to a point, and is relatively simple to get started with.
3.) Vercel - vercel.com
A bit more technical in my opinion - but again, very similar to the above two and has a free tier.
All three are great options, and I'd recommend looking into them in more detail to see what option is best for you. Can't go wrong with any of them.
I had a similar problem: a Python-Flask-SQLite app for me and my office pals to use together.
The solution was creating one .exe file with PyInstaller and hosting it, together with the database file, on a network drive (one that everyone who will use the app has access to). As everybody (~10 people) sees the same DB, things work fine!
I am new to Redis and would like to store the web analytics of a web site, both globally and per user activity.
Below is what I have so far.
// assumes the node_redis package
const redis = require('redis');
const client = redis.createClient();

// to get all unique IPs (ip comes from the incoming request)
client.sadd('visitors', ip);
// to record hits per IP
client.hincrby('hits', ip, 1);
The above works fine so far, and I do get the number of distinct IPs and a hit counter per IP.
The problem comes when storing the activities performed by each IP, i.e. the links clicked and the searches made, together with a datetime.
Can someone please shed some light on how best to manage this?
Thanks
the problem comes when storing the activities performed by each IP
You will need a separate structure for storing these.
The simplest rational structure is a "list of actions by session". Take a look at the sorted set commands, which provide a basic framework for creating a list of actions within a session.
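For instance, one sorted set per IP with the event time as the score keeps actions in chronological order (the key convention and event shape below are just examples):

// Record one activity for this IP; Date.now() as the score keeps order.
client.zadd('activity:' + ip, Date.now(), JSON.stringify({
  type: 'click',
  link: '/some/page'
}));

// Later: fetch that IP's actions in chronological order.
client.zrange('activity:' + ip, 0, -1, (err, actions) => {
  if (!err) console.log(actions.map((a) => JSON.parse(a)));
});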
This will get you something quickly. However, it is probably not what you really want; in fact, Redis is probably not useful for this at all.
If you want to re-trace an entire site visit, you really want to connect to some sort of true analytics framework. There are dozens of website tracking tools that provide this type of functionality, so it's not really clear that building one yourself is very efficient.
I am about to begin a fairly simple application, but I want to make sure I structure the backend of the application correctly because I plan to expand on it greatly in the future. Here's my question:
I am creating both a Windows Phone 8 and Windows 8 Store application. In this case, it is a unit conversion application where the user is given the ability to define custom unit conversion units. I would like to allow the user to essentially sync those custom units between the two platforms so that they don't need to define them multiple times.
What backend approach should I take?
XML storage coupled with SkyDrive, Azure, a local database that syncs over USB... There are a lot of options, and I'm not sure which way is preferred in the scenario I described above. Any help or suggestions would be greatly appreciated.
As for actual data sharing, I would suggest using Azure, which is a bit more reliable and also transparent to the user (as opposed to a local DB syncing over USB) and cleaner than XML files on SkyDrive (the user doesn't need to see those files anyway).
As for code sharing you could use two techniques:
Portable Class Libraries
Linked Files
I have recently written two articles on this:
http://www.kenneth-truyers.net/2013/03/27/portable-class-libraries-or-source-code-sharing/
http://www.kenneth-truyers.net/2013/02/24/patterns-for-sharing-code-in-windows-phone-and-windows-8-applications/
It doesn't actually have to be Azure if you are aiming for a lower price range. You can also choose a web host and build a Web API service, which will help you sync your data and push it to all devices. Of course, Azure is preferred as the ultimate solution, because it offers many more features.
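The service itself can be tiny. A minimal sketch of such a sync endpoint, written here in Express purely for illustration (the answer presumably had ASP.NET Web API in mind), with in-memory storage and no authentication, both of which a real version would need to replace:

// sync-api.js - minimal sketch of a "custom units" sync service.
const express = require('express');
const app = express();
app.use(express.json());

const unitsByUser = {}; // userId -> array of custom unit definitions

// Client pulls its units on startup.
app.get('/units/:userId', (req, res) => {
  res.json(unitsByUser[req.params.userId] || []);
});

// Client pushes its full list whenever a unit is added or edited.
app.put('/units/:userId', (req, res) => {
  unitsByUser[req.params.userId] = req.body;
  res.sendStatus(204);
});

app.listen(3000);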
I have used Windows 8 roaming data support for one app. In my case, the data is simply the history of user operations in the app, and its size is < 1 KB. Windows 8 roaming data support can hold up to 100 KB of data per the documentation and is a good start for Windows 8 apps with very low investment. It covers all Windows 8 devices and is certainly good for simple key/value data per user.
Now the caveats: it does not currently support Windows Phone roaming. That is a feature request for Phone 8 and can be voted up. Finally, it will not roam to Android and other mobile devices.
Another way to think about it - when do you need to build it?
If it is simply per-user data storage, a backend need not be in place in the first release. You can start with Windows 8 roaming data support, and in some future release it can be moved from roaming data to SkyDrive, your own Web API, or Azure. What I mean to say is, it need not be built up front.
If the backend is going to allow sharing of data between users, or an experience built over aggregated data from multiple users, then it is an altogether different problem. In that case, roaming data is not a solution; a backend Web API or service is a must.
HTH.
Other references: guidelines for Windows 8 roaming data.
While there are many social networks in the wild, most rely on data stored on a central site owned by a third party.
I'd like to build a solution where the data remains local on members' systems. Think of the project as an address book which automagically updates a contact's data as soon as that contact changes its coordinates. This base idea might be extended later on...
Updates will be transferred using public/private key cryptography via a central host. The sole role of the host is to act as a store-and-forward intermediary. Private keys remain private on each member's system.
If two clients are both online and a P2P connection can be established, the clients could transfer data telegrams without the central host.
Thus, sender and receiver will be the only parties able to create authentic messages.
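A toy sketch of that sign-and-encrypt flow using Node's crypto module (RSA-2048 and the message contents are only placeholders; choosing the real algorithm and key length is exactly one of the questions below):

// telegram.js - sign-and-encrypt sketch; both key pairs generated for demo.
const crypto = require('crypto');
const alice = crypto.generateKeyPairSync('rsa', { modulusLength: 2048 });
const bob = crypto.generateKeyPairSync('rsa', { modulusLength: 2048 });

// Alice encrypts for Bob and signs with her own private key.
const payload = Buffer.from('new coordinates for the address book');
const ciphertext = crypto.publicEncrypt(bob.publicKey, payload);
const signature = crypto.sign('sha256', ciphertext, alice.privateKey);

// Bob, after the store-and-forward hop, verifies and decrypts.
if (crypto.verify('sha256', ciphertext, alice.publicKey, signature)) {
  console.log(crypto.privateDecrypt(bob.privateKey, ciphertext).toString());
}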
Questions:
Are there certain protocols I should adopt?
Are there any security concerns I should keep in mind?
Are there certain services that should be integrated or used somehow?
More technically:
Should I use services provided by e.g. Amazon or Google?
Or is a raw web server better? If yes: why?
Which algorithm and key length should be used?
UPDATE-1
I googled my own question title and found this academic project developed 2008/09: http://www.lifesocial.org/.
The solution you are describing sounds remarkably like email, with encrypted messages as the payload, and an application rather than a human being creating the messages.
It doesn't really sound like "p2p" - in most P2P protocols, the only requirement for central servers is discovery - you're using store & forward.
As a quick proof of concept, I'd set up an email server and build an application that sends emails to addresses registered on that server, encrypted using PGP - the tooling and libraries are available, so you should be able to get that up and running in days rather than weeks. In my experience, building a throw-away PoC for this kind of question is a great way of sifting out the nugget of the idea.
The second issue is that the nature of a social network is that it's a network. Your design may require you to store more than the data of the two direct contacts - you may also have to store their friends, or at least the public interactions those friends have had.
This may not be part of your plan, but if it is, you need to think it through early on - you may end up having to transmit the entire social graph to each participant for local storage, which creates a scalability problem...
The paper about Safebook might be interesting for you.
Also you could take a look at other distributed OSN and see what they are doing.
None of the federated networks mentioned at http://en.wikipedia.org/wiki/Distributed_social_network is actually distributed. What Stefan intends to do is indeed new and has so far only been explored by a few proprietary efforts.
I've been thinking about the same concept for the last two years. I've finally decided to give it a try using Python.
I've spent the better part of last night and this morning writing a socket-communication script and server. I also plan to remove the central server from the equation, as it's just plain cumbersome and there's no point to it when all the members could keep copies of their friends' keys.
Each profile could be accessed via a hashed string of someone's public key. My social network relies on nodes and pods. Pods are computers which have their ports open to the network; they help with relaying traffic, as most firewalls block incoming socket requests. Nodes store information and share it with other nodes. Each node gets a directory of active pods which may be used to relay its traffic.
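Deriving such a profile address is a one-liner once you have the key (shown here in Node rather than Python just to match the other snippets in this thread; Ed25519 and SHA-256 are my own arbitrary picks):

// profile-address.js - address a profile by a hash of its public key.
const crypto = require('crypto');

// In the real network each member already has a key pair.
const { publicKey } = crypto.generateKeyPairSync('ed25519');

// The profile address is simply a hash of the exported public key.
const address = crypto
  .createHash('sha256')
  .update(publicKey.export({ type: 'spki', format: 'der' }))
  .digest('hex');

console.log(address); // the lookup key other nodes use for this profile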
The PeerSoN project looks like something you might be interested in: http://www.peerson.net/index.shtml
They have done a lot of research and the papers are available on their site.
Some thoughts about it:
Protocols to use: you could look at existing P2P programs and their design.
Security concerns: privacy. Take great care not to open doors: a whole system can get compromised because you left one open.
Services: you could integrate with the regular social networks through their APIs.
People will have to install a program on their computers and remember to open it every time, like any P2P client. Leaving everything on a web server has a smaller footprint and requires less user action.
Somehow you'll need a centralized server to manage searches. You can't just broadcast to the whole internet to find friends. Or you'll have to rely upon email requests to add someone, and to do that you'll need to know the email address in advance.
The fewer friends/contacts who use your program, the fewer people will want to use it, since it won't have contact information available.
I see that your server will be store-and-forward, so the update problem is solved.