I am using ClamAV on my site to scan CVs. I have ClamAV version 0.99.1 and I want to update to 0.99.2.
I downloaded the latest version from https://www.clamav.net/downloads, but I am not sure whether putting it into my live environment will cause problems. I don't know how to update the version manually, and I cannot take any risk because it is connected to my live site.
Is it a matter of manually replacing the files contained in the db folder? That is:
daily.cvd
main.cvd
bytecode.cvd
I am also getting a warning about an outdated antivirus signature database.
If you want updating the ClamAV server to carry no risk for your production site, I recommend setting up another server/environment on which to install and test the new ClamAV version. If everything works fine with the new version, you can create the ClamClient object with the location of this new server, and the old server can then serve as the test environment for future updates. As for updating the live site itself, Azure App Service lets you set up staging environments: deploy the app with the updates to a staging deployment slot, validate the changes there, and then swap it with the production slot. That eliminates downtime.
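As an illustrative sketch of the final swap (the resource group, app, and slot names below are placeholders, and this assumes the AzureRM PowerShell module is installed and you are signed in):
Login-AzureRmAccount
# Deploy and validate the update in the "staging" slot first, then swap it into production
Switch-AzureRmWebAppSlot -ResourceGroupName "MyResourceGroup" -Name "my-webapp" -SourceSlotName "staging" -DestinationSlotName "production"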
I am currently working to move TFS from its current server to a new environment. My team has already completed the steps described in this Microsoft documentation on moving TFS to a new server.
We have already installed and migrated/restored our SQL database on the new server and ensured all the prerequisites for TFS were installed. The TFS Admin Console is currently installed, and we are trying to configure it using the existing Tfs_Configure database. That all works without a problem; however, when we look at our existing Project Collections, the build service is still "linked", with the TFS address set to the old server rather than the one we migrated to.
I have detached the collections in the old environment and reattached them in the new environment, but they still seem to be trying to build on the old server. I have read that we needed to detach them prior to migrating any data over. Did we do something incorrectly, or rather, did we try to detach the collections too late in the process?
You need to unregister the build service that uses the <<oldcomputername>> and register a build service with the <<newcomputername>>. Do the same for the agent and the controller.
On each build server, open the administration console and stop the build service.
In the properties for the build service, update the communications properties.
Note that the build service is configured at the project collection level.
Moreover, for a vNext build agent you need to remove and reconfigure the agent.
To remove the agent:
.\config remove
After you've removed the agent, you can configure it again.
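A hedged sketch of the re-registration against the new server (the URL, pool, and agent name are placeholders; verify the flags against your agent version's configuration help):
.\config.cmd --url http://newtfsserver:8080/tfs --auth Integrated --pool default --agent BuildAgent01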
You have to update your build services to point to the new server. For XAML build, you'll have to reconfigure the build controller. For the modern build system, you'll need to reconfigure your build agent(s).
I have a reasonably complex Access 2013 web app which is now in production on hosted O365 SharePoint. I would like to take a backup (package snapshot) into a test environment, and then migrate this to production once development is complete (I certainly don't want to do development on the production system!). The problem is that the snapshot also backs up all data, so uploading the new package over the top of the existing package in the SharePoint app repository reverts the data to the time of the snapshot as well. Alternatively, rolling back to the original snapshot if there are issues would lose all data entered after the new package was applied.
I can easily get a second version of the app going by saving it as a new application, etc., but this creates a new product ID in the app store. We also use an Access 2013 desktop accdb front end to hook directly into the Azure SQL database to do all the stuff that the web app can't provide (formatted reports, etc.), so I can't just create a new app every time without dealing with all of the credential and database renaming issues.
So my question is: does anybody know how to safely operate a test environment for Access 2013 web app development? One needs to be able to apply an updated version, or roll back to the old one if there are problems, without rolling back the data. With the desktop client I can just save a new copy of the accdb file every time, obviously. I don't mind creating a new instance of or link to the app on SharePoint each time; however, this obviously generates a totally new database (server name, DB location, login IDs, etc.) as well. You would hope there is a way to upload and replace your app without touching the data, or else how can you develop without working directly in production?
Any answers would be really appreciated.
Thanks.
Our current deploy process goes something like this:
Use grunt to create production assets.
Create a datestamp and point files at our CDN (e.g. /scripts/20140324142354/app.min.js).
Sidenote: I've heard this process called "versioning" before but I'm not sure if it's the proper term.
Commit build to github.
Run git pull on the web servers to retrieve the new code from github.
This is a node.js site and we are using forever -w to watch for file changes and update the site accordingly.
We have a route setup in our app to serve the latest version of the app via /scripts/*/app.min.js.
The reason we version like this is because our CDN is set to cache JavaScript files indefinitely and this purposely creates a cache miss so that the code is updated on the CDN (and also in our users' browsers).
This works fine most of the time, but it breaks down if one of the servers lags a bit in checking out the new code.
Sometimes a client hits the page while a deploy is in progress and tries to retrieve the new JavaScript code from the CDN. The CDN tries to retrieve it but hits a server that isn't finished checking out the new code yet and caches an old or partially downloaded file causing all sorts of problems.
This problem is exacerbated by the fact that our CDN has many edge locations and so the problem isn't always immediately visible to us from our office. Some edge locations may have pulled down old/bad code while others may have pulled down new/good code.
Is there a better way to do these deployments that will avoid this issue?
As a general rule of thumb:
Don't do live upgrades (unless the language supports it, but even then, think twice).
Pulling code using git pull and then waiting for the app to notice changes to files sounds a lot like the '90s: uploading PHP files to an Apache web server using FTP (or SFTP if you are cool) and waiting for Apache to notice that they were updated. It can't happen atomically, so of course there is a race condition. Some users WILL get a half-built and broken site.
I recommend only upgrading your live and running application while no one is using it. Hopefully you have a pool of servers behind a load balancer of some sort, which will allow you to remove them one at a time and upgrade them.
This will mean that users will be able to use both the old and the new site at the same time depending on how and when they access it, but that is much better than not being able to access it at all.
Ideally you would be able to spin up copies of each of the web servers that you have running with the new version of the site. Check that the new version works, then atomically update the load balancer so that everyone gets bumped to the new site at the same time. Only once everything is verified to be working perfectly are the old machines shut down and decommissioned, or reused.
Step 4 in your procedure should be:
# fetch the new code into a fresh, timestamped directory instead of pulling in place
git archive --remote $yourgithubrepo --prefix=$timestamp/ | tar -xf -
stop-server
# -n replaces the symlink itself instead of creating a link inside the old target directory
ln -sfn $timestamp current
start-server
Your server would use the current directory (well, a symlink) at all times. No matter how long the deploy takes, your application is in a consistent state.
I'll go ahead and post our far-from-ideal monkey-patch that we're using right now.
We deploy once, which may or may not go as planned. Once we're sure the code is deployed on all the servers, we do another build in which the only thing that changes is the version number.
Then we deploy again server by server.
The race condition still exists, but because the application code is identical between the two versions, the issue is masked: no matter which server the CDN hits, it gets the "latest" code.
Live to Development Migration
We are currently migrating some SharePoint sites from external live environments to development environments hosted on VMs. The sites are a mixture of websites and intranets. We have not had access to the live environments, so we cannot describe the structure of the sites.
The sites do have some customisations applied. Some customisations are packaged as WSP packages for which we have the source code (somewhere; previous developers have left it and we need to find it).
We have no knowledge of how the sites were set up, so the objective is to restore live back to a development VM so that we can fix bugs and make enhancements moving forward.
What steps should we go through for this?
We have outlined the following steps:
Take a copy of the content database(s)
Take a copy of the WSP packages straight from the live environment (using PowerShell; see the sketch after this list)
Create site collections from live on dev
Restore the content databases from live on to these.
Deploy the WSPs from live onto dev.
Activate the features from live on our development vm's.
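For the WSP copy, a sketch along these lines is commonly used; run it from the SharePoint Management Shell on a live farm server (the output folder is a placeholder):
Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue
# Save every farm solution's .wsp file to a local backup folder
Get-SPSolution | ForEach-Object { $_.SolutionFile.SaveAs("C:\WspBackup\" + $_.Name) }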
What other steps are we missing? I am sure there are some.
What I would add here are:
Make sure your notifications don't go to the users of live environment
Make sure your BDC and custom connectivity components don't modify or otherwise load production external data sources
Document and verify (using PowerShell; see the sketch after this list) that all assets are deployed accurately, because sometimes you'll face issues such as event receiver registration, etc.
Make sure your InfoPath forms are reconfigured to use the updated data sources
Make sure your Alternate Access Mappings and Incoming/Outgoing Email settings are adequate
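As an illustrative verification sketch (the site URL is a placeholder):
Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue
# Confirm every solution landed and deployed on the dev farm
Get-SPSolution | Select-Object Name, Deployed, DeploymentState
# Confirm which features are activated on the dev site collection
Get-SPFeature -Site http://dev-intranet | Select-Object DisplayName, Id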
I have a small solution composed of two main projects: an MVC4 Web API and a Silverlight 5 application. I configured and deployed the application initially on the Azure platform and it all went great, but ever since, when I deploy again, the Silverlight project does not get pushed and the online site has the old version.
I should mention that it all works great with the Azure emulator on my local dev machine.
Anybody had a similar issue?
Regards,
I would suspect first (as Simon suggests) that the browser likely still has the previous client cached and loads that instead of downloading your new client.
You can use a version number in the markup on the page that hosts the Silverlight app to help. While it's easy for you to clear the cache, you don't really want to have to tell users to do that whenever you update.
Set the version to whatever your latest assembly version is (the Silverlight client project assembly); this will force the browser to download the client whenever the cached version has a lower number.
<param name="source" value="AppPath/App.xap?version=2.0.0.6"/>
OK, so after pulling my hair out, I finally figured it out.
I have to change the build configuration to Release in VS, do a rebuild, and then publish, because apparently the Azure project does not rebuild the project when you publish it.
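The same rebuild can be forced from the command line; a minimal sketch, assuming a Developer Command Prompt and a placeholder solution name:
# Force a clean Release build before publishing the Azure package
msbuild .\MySolution.sln /t:Rebuild /p:Configuration=Release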
To solve this issue you'll need to identify the source of the problem (is it a client-side caching issue or not?). Even though you say caching isn't the problem, we'll need to be sure about this first.
What I suggest is that you do the following first:
Activate Remote Desktop on your role
Connect through RDP and save this file to the role: http://support.microsoft.com/kb/841290 (fciv.exe)
Find the *.xap file (usually in E:\sitesroot) and get its checksum (using fciv.exe)
Modify the Silverlight project locally (maybe change a label or move around an element) to make sure its hash has changed.
Redeploy the application
Connect through RDP and use fciv.exe to get the checksum of the *.xap file once again
Compare both checksums
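If the role instance has PowerShell 4.0 or later, Get-FileHash is an alternative to fciv.exe (the path below is illustrative):
# MD5 is sufficient here; we only need to compare before/after deployment
Get-FileHash -Algorithm MD5 E:\sitesroot\0\ClientBin\App.xap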
If the checksums are different, then it means that the deployment worked correctly and the Silverlight xap has been updated. If the checksum is the same, the problem lies with the deployment.
Please let us know the result so we can help you find the solution.