Prevent users from copying ClickOnce application - security

I have a ClickOnce app targeting .NET 4.0.
How can I prevent users from copying the whole installation directory from one machine to another?
I don't need a solution that can't be hacked (if such a thing exists), but I need one that stops everyday users from doing this.
I already tried the following:
Store the MAC address when the app is installed and try to match it every time the app starts - this didn't work because users may switch between wired and wireless connections, which changes the MAC address.
Match the HDD serial number - this doesn't work on WinXP.
Some extra info that might help:
I'm keeping track of my customers in a DB: when a customer requests the product, I add a new record. When they run the app for the first time, it asks for the ProdId and changes the record's status so they can't install it again using the same ProdId.
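If it helps, here is a minimal sketch of binding the licence to the machine using the Windows MachineGuid instead of a MAC or HDD serial (MachineGuid is created when Windows is installed and does not change with the network adapter). The app itself is .NET, so treat this Python as pseudocode for the C# equivalent; the storage path is just an example:

import hashlib
import json
import os
import winreg

LICENSE_FILE = os.path.join(os.environ["PROGRAMDATA"], "MyApp", "license.json")  # example path

def machine_fingerprint():
    # MachineGuid survives NIC changes, unlike a MAC address or an HDD serial number.
    key_path = r"SOFTWARE\Microsoft\Cryptography"
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, key_path, 0,
                        winreg.KEY_READ | winreg.KEY_WOW64_64KEY) as key:
        guid, _ = winreg.QueryValueEx(key, "MachineGuid")
    return hashlib.sha256(guid.encode("utf-8")).hexdigest()

def activate(prod_id):
    # First run: bind the ProdId to this machine and store the binding locally
    # (ideally also on the server, next to the customer record).
    os.makedirs(os.path.dirname(LICENSE_FILE), exist_ok=True)
    with open(LICENSE_FILE, "w") as f:
        json.dump({"prod_id": prod_id, "fingerprint": machine_fingerprint()}, f)

def is_activated_here():
    # Every start: refuse to run if the stored fingerprint doesn't match this machine.
    try:
        with open(LICENSE_FILE) as f:
            data = json.load(f)
    except FileNotFoundError:
        return False
    return data["fingerprint"] == machine_fingerprint()

A copied installation directory then fails the fingerprint check on the second machine, which is enough to stop everyday users even though a determined attacker could still patch it out.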

Related

Why does VS2022 keep logging me into the wrong server?

I've been working on a project using Azure servers for a little while now. After a month my subscription ended and I had to get a new one with a new account (it's a school project, so I'm too poor to get the normal subscription). Now, after setting everything up and trying to connect to my new database (which has a 2 appended to the end of its address), VS still tries to log me into the old server.
I've already tried deleting the whole project, cloning it again from GitLab, clearing the cache on my PC, restarting it, and adding/removing connections in VS, but the error still occurs.
I'm getting "Login failed for user X", and when I check the error logs I see that it still tries to connect to the old server.
Any ideas what could cause it to keep logging into the old server?
In this part of the program I'm using only one connection string, which I have changed to the proper one.

Ways to overcome the 230s hard-coded limit on the SCM endpoint

Background
We have a PHP App Service and MySQL that is deployed using an Azure DevOps pipeline (YML). The content itself is a PHP site that is packaged up into a single file using Akeeba by an external supplier. The package is a Zip file (which can be deployed as a standard Zip deployment) and inside the Zip file is a huge JPA file. The JPA is essentially the whole web site plus database tables, settings, file renames and a ton of other stuff all rolled into one JPA file. Akeeba essentially unzips the files, copies them to the right places, does all the DB stuff and so on. To kick the process off, we can simply connect to a specific URL (web site + path) and run the PHP, which does all the clever unpackaging via a web GUI. But we want to include this stage in the pipeline instead so that the process is fully automated end to end. Akeeba has a CLI as an alternative to the web GUI deployment, so it should go like this:
Create web app
Deploy the web site ZIP (zipDeploy)
Use the REST API to access Kudu and run the relevant command (php install.php web.jpa) to unpack the JPA and do the MySQL stuff - this normally takes well over 30 minutes (it is a big site and it has a lot of "stuff" to do, but it does actually work in the end).
The problem is that the SCM REST API has a hard-coded 230s limit as described here: https://blog.headforcloud.com/2016/11/15/azure-app-service-hard-timeout-limit/
So, the unpack stage keeps throwing "Invoke-RestMethod : 500 - The request timed out" exactly at the 230s mark.
We have tried SCM_COMMAND_IDLE_TIMEOUT and WEBJOBS_IDLE_TIMEOUT but, unsurprisingly, they did not make any difference.
$cmd = @{"command"="php .\site\wwwroot\install.php .\site\wwwroot\web.jpa .\site\wwwroot"}
Invoke-RestMethod -Uri $url -Headers @{"Authorization"="Basic $creds"} -Body (ConvertTo-Json $cmd) -Method Post -ContentType "application/json" -TimeoutSec 7200
I can think of a few hypothetical ways around it (some quite eccentric):
Find another way to run CLI commands inside the Web App after deployment other than the Kudu REST API. Is there such a thing? I Googled and checked SO but all I found were pointers to the way we do it (or try to do it) now.
Use something like Selenium to click the GUI buttons instead of using the CLI. (I do not know if they would suffer a timeout.)
Instead of running the command via the Kudu REST API, use the same API to create and deploy a script to the web server, start it, and then let the REST call exit while the script keeps running on the Web App. Essentially, bodge an async call without the callback and have the pipeline check in on the site at, say, 5-minute intervals. Clunky. (A rough sketch of this idea is included at the end of this question.)
Extend the 230s limit - but I do not think that Microsoft make this possible.
Make the web site as fast as possible during the deployment in the hope of getting it under the 4-minute mark and then down-scale it. Yuk!
See what the Akeeba JPA unpacking actually does, unpack it pre-deployment and do what the unpackage process does but controlled via the Pipeline. This is potentially a lot of work and would lose the support of the supplier.
Give up on an automated deployment. That would rather defeat much of the purpose of a Devops pipeline.
Try AWS + Terraform instead. That's not an approved infrastructure environment, however.
Given that Microsoft understandably do not want long-running API calls hanging around, I understand why the limit exists. However, I would therefore expect there to be a mechanism to interact with an App Service file system via a CLI in some other way. Does anyone know how?
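For the "bodge an async call" option above, this is roughly what the pipeline step could look like. The Kudu /api/command and /api/vfs endpoints do exist, but the app name, credentials, marker-file convention and, above all, the assumption that a detached process survives the end of the Kudu request are all mine and would need verifying:

import base64
import time
import requests

app = "my-php-app"                                            # assumption: App Service name
creds = base64.b64encode(b"$deployuser:password").decode()    # Kudu deployment credentials
auth = {"Authorization": f"Basic {creds}"}
scm = f"https://{app}.scm.azurewebsites.net"

# 1) Upload a tiny wrapper script via the Kudu VFS API; it writes a marker file when done.
wrapper = "php install.php web.jpa . > install.log 2>&1\r\necho done > install.done\r\n"
requests.put(f"{scm}/api/vfs/site/wwwroot/run-install.cmd",
             data=wrapper, headers={**auth, "If-Match": "*"})

# 2) Kick it off detached so the command API call returns well inside the 230s window.
requests.post(f"{scm}/api/command",
              json={"command": "start /B run-install.cmd", "dir": "site\\wwwroot"},
              headers=auth, timeout=120)

# 3) Poll for the marker file instead of holding one long request open.
while True:
    r = requests.get(f"{scm}/api/vfs/site/wwwroot/install.done", headers=auth)
    if r.status_code == 200:
        break
    time.sleep(300)   # check back every 5 minutes, as suggested above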
The 4-minute idle timeout is at the TCP level and is implemented on the Azure hardware load balancer. This timeout is not configurable and cannot be changed. One thing I want to mention is that this is an idle timeout at the TCP level, which means it is only hit if the connection is idle and no data transfer is happening. To give more detail, it will be hit if the web application receives the request and keeps processing it for more than 4 minutes without sending any data back.
Resolution
Ideally, in a web application it is not good to keep the underlying HTTP request open, and 4 minutes is a decent amount of time. If you have a requirement for background processing within your web application, then the recommended solution is to use Azure WebJobs and have the Azure Web App interact with the WebJob so it is notified once the background processing is done (Azure provides many ways to do this, such as queue triggers, and you can choose the method that suits you best). Azure WebJobs are designed for background processing and you can do as much background processing as you want within them. I am sharing a few articles that talk about WebJobs in detail:
· http://www.hanselman.com/blog/IntroducingWindowsAzureWebJobs.aspx
· https://azure.microsoft.com/en-us/documentation/articles/websites-webjobs-resources/
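To make the pattern concrete: the web request only drops a message on a queue and returns straight away, while a continuously running WebJob picks the message up and does the long work outside any HTTP request. A minimal sketch using the Azure Storage queue SDK for Python; the queue name, connection string and the do_long_unpack helper are placeholders, and running this as a continuous WebJob (rather than using the .NET WebJobs SDK) is an assumption:

import time
from azure.storage.queue import QueueClient

conn_str = "<storage account connection string>"
queue = QueueClient.from_connection_string(conn_str, "deploy-jobs")

# Web app side: enqueue the work and return to the caller immediately.
def request_unpack(package_name):
    queue.send_message(package_name)

# WebJob side: loop forever, doing the 30+ minute work with no HTTP request involved.
def worker():
    while True:
        for msg in queue.receive_messages(visibility_timeout=3600):
            do_long_unpack(msg.content)   # placeholder for the actual long-running job
            queue.delete_message(msg)     # only remove the message once the work succeeded
        time.sleep(10)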
============================================================================
It totally depends on the app. A message queue comes to mind. There are a lot of potential solutions, and it will be up to you to decide.
============================================================================
Option #1)
You can change the code to send some sort of "continue" header to the client to keep the session open.
A sample is shown here
This shows the HTTP Headers with the Expect 100-continue header:
https://msdn.microsoft.com/en-us/library/aa287673%28v=vs.71%29.aspx?f=255&MSPPError=-2147217396
This shows how to add a Header to the collection:
https://msdn.microsoft.com/en-us/library/aa287502(v=vs.71).aspx
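The linked pages cover the header mechanics; a related trick for the same idle-timeout problem (not what those samples show) is to stream a little data back while the work runs, so the connection is never silent long enough to hit the limit. A rough sketch using Flask, which is purely illustrative and not part of the original PHP setup; do_work_in_steps is a placeholder for the long job broken into chunks:

from flask import Flask, Response

app = Flask(__name__)

@app.route("/long-task")
def long_task():
    def run():
        # Send a byte of padding after each chunk of work so the connection
        # never sits idle, then send the final result.
        for step in do_work_in_steps():   # placeholder generator for the long job
            yield " "
        yield "done\n"
    return Response(run(), mimetype="text/plain")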
Option #2) Progress bar
This sample shows how to use a progress bar:
https://social.msdn.microsoft.com/Forums/vstudio/en-US/d84f4c89-ebbf-44d3-bc4e-43525ae1df45/how-to-increase-progressbar-when-i-running-havey-query-in-sql-server-or-oracle-?forum=csharpgeneral
Option #3) A common practice to keep the connection active for a longer period is to use TCP keep-alive. Packets are sent when no activity is detected on the connection. By keeping ongoing network activity, the idle timeout value is never hit and the connection is maintained for a long period.
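For reference, this is what enabling TCP keep-alive looks like at the socket level in Python. The TCP_KEEPIDLE/TCP_KEEPINTVL/TCP_KEEPCNT options shown here are Linux-specific, and whether the load balancer actually resets its idle timer on keep-alive probes is worth verifying for your case:

import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)    # turn keep-alive on
# Start probing after 60s of silence, then every 30s, giving up after 5 failed probes.
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 60)
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, 30)
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, 5)
sock.connect(("example.com", 80))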
Option #4) You can also try hosting your application on an IaaS VM instead of an App Service. This may avoid the ARR timeout issue because its architecture is different, and I believe the timeout is configurable there.

How to find the source of audit failure in Windows server?

I get audit failure messages in the Security event log every second.
Event ID: 4625
Logon type: 3
Process name: lsass.exe
Failed login: schOPSSH
The COPSSH (SSH for Windows) software was installed on the machine and its user was svcOPSSH, not schOPSSH. So I thought that the person who installed it had misconfigured it. I stopped the SSH service, and even removed the software and deleted all users except admin, but I still get login attempts with that username. I checked all services and their credentials; everything was okay. I searched the registry for the user "schOPSSH" but couldn't find any record.
Do you have any idea how to find the source of these login attempts? Thanks.
I figured it out. In my case there was a brute-force attack coming from an external IP. I didn't expect that. I used MS Network Monitor to check the incoming requests.
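For anyone else chasing this down: the 4625 event itself usually records the attacker's address in its "Source Network Address" field, so you can often identify the source without a packet capture. A quick sketch that shells out to the built-in wevtutil tool to pull the newest failed logons (Windows only, run from an elevated prompt):

import re
import subprocess

# Grab the 50 newest 4625 (logon failure) events from the Security log as text.
out = subprocess.run(
    ["wevtutil", "qe", "Security", "/q:*[System[(EventID=4625)]]",
     "/c:50", "/rd:true", "/f:text"],
    capture_output=True, text=True, check=True,
).stdout

# List the account names and source addresses recorded in those events.
for field in ("Account Name", "Source Network Address"):
    values = re.findall(rf"{field}:\s*(\S+)", out)
    print(field, "->", sorted(set(values)))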

Is there a way to delete devices from a PubNub app?

I am hitting the 100-device limit on PubNub's free plan, but most of those devices are not active. Can I somehow remove them?
PubNub Daily Active Devices
They are not removable, because it is a running count of unique devices per day. The next day, the count starts over. Most likely this is either a mobile app with frequently changing IPs (on the road, like ride share, taxi dispatch, etc.) or you are using a host like Heroku or Google App Engine, where each execution of a PubNub operation (probably a publish) causes the host to create a new dynamic IP address.
If you are using GAE, Heroku or another host/platform/cloud that uses dynamic IPs when it creates a process, then you will continue to see more daily active devices than you actually have. You will want to implement UUID reuse. The new pricing is based on UUIDs now anyway, but PubNub will generate a new UUID with every PubNub init, so you need to persist a UUID and reuse it (set the UUID on the PubNub instance) so it doesn't over-count UUIDs.
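A small sketch of that UUID reuse with the PubNub Python SDK; the file location and the key values are placeholders, and the point is simply to generate the UUID once, persist it, and hand the same value to every PubNub instance:

import os
import uuid
from pubnub.pnconfiguration import PNConfiguration
from pubnub.pubnub import PubNub

UUID_FILE = os.path.expanduser("~/.myapp_pubnub_uuid")   # example location

def persistent_uuid():
    # Reuse the UUID from disk if we have one; otherwise create it once and save it.
    if os.path.exists(UUID_FILE):
        with open(UUID_FILE) as f:
            return f.read().strip()
    value = str(uuid.uuid4())
    with open(UUID_FILE, "w") as f:
        f.write(value)
    return value

pnconfig = PNConfiguration()
pnconfig.subscribe_key = "sub-c-..."    # placeholder
pnconfig.publish_key = "pub-c-..."      # placeholder
pnconfig.uuid = persistent_uuid()       # same device, same UUID, every init
pubnub = PubNub(pnconfig)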

Beanstalkd / Pheanstalk security issue

I have just started using beanstalkd and pheanstalk, and I am curious whether the following situation is a security issue (and if not, why not?):
When designing a queue that will contain jobs for an eventual worker script to pick up and perform SQL database queries, I asked a friend what I could do to prevent an online user from connecting to port 11300 of my server and inserting a job into the queue himself, thereby causing a job with malicious code to be executed. I was told that I could include a password inside the job being sent.
However, after some time I realised that someone could perform a few simple commands in a terminal, obtain a job from the queue and hence find the password, and then create jobs with the password included:
telnet thewebsitesipaddress 11300 //creating a telnet connection
list-tubes //finding which tubes are currently being used
use a_tube_found //using one of the tubes found
peek-ready //see whats inside one of the jobs and find the password
What could be done to make sure this does not happen and my queue doesn't get hacked / controlled?
Thanks in advance!
You can avoid those situations by placing beanstalkd behind a firewall or in a private network.
DigitalOcean (for example) offers such a service, where you have a private network IP address which can be accessed only from servers in the same location.
We've been using beanstalkd in our company for more than a year, and we haven't had any of those issues yet.
I see, but what if the producer is a page called index.php, where a job is sent to the queue whenever someone visits it? In that situation, wouldn't the server have to be on an open network?
The browser has no way to get in contact with the job server; it only accesses the resources you allow it to, that is, the view page. Only the back-end is allowed to access the job server. Also, if you build the web application in such a way that the front-end is separated from the back-end, you're going to have even fewer potential security issues.
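On top of the firewall/private-network advice above, the "password inside the job" idea from the question can be hardened into a proper signature: the producer signs each job body with a shared secret and the worker refuses anything whose signature doesn't verify, so even a job injected over telnet is useless without the key. A minimal sketch of the signing logic in Python (the original stack is PHP/pheanstalk, so this is only an illustration, and the queue transport itself is left out):

import hashlib
import hmac
import json

SECRET = b"a-long-random-shared-secret"   # placeholder: keep this out of the repo and the queue

def make_job(payload: dict) -> str:
    # Producer: serialise the payload and attach an HMAC over it.
    body = json.dumps(payload, sort_keys=True)
    sig = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    return json.dumps({"body": body, "sig": sig})

def read_job(raw: str) -> dict:
    # Worker: recompute the HMAC and reject the job if it doesn't match.
    envelope = json.loads(raw)
    expected = hmac.new(SECRET, envelope["body"].encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, envelope["sig"]):
        raise ValueError("job signature invalid - dropping it")
    return json.loads(envelope["body"])

Unlike a plain password, the signature cannot be lifted from one job and reused to authorise a different payload.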

Resources