How can we use a Solution Package (WSP) in MOSS 2007 to synchronize lists from one server to another?
Have a look at this tool: Content Migration Wizard
It allows you to copy lists from farm to farm using the Migration API. You can also script it to run automatically.
Copying data/schema from one server to another is not supported out of the box and requires custom code.
Is it really necessary that the items 'exist' on both servers? It sounds error-prone to me. Maybe it's possible to simply 'aggregate' the items on one server by using a web service or an RSS feed.
If copying is required then I would create a SharePoint timer job that runs every x minutes/hours to do the synchronization (a rough skeleton is sketched below). Let the custom job communicate with the web services on the other server.
Note: since your job only runs every x minutes, your synchronization is not real-time!
Be careful with large workloads. Make sure you don't stress your server by trying to synchronize 10,000 items every minute.
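For illustration, here is a rough skeleton of such a custom timer job for MOSS 2007. The class name, job name, and 15-minute interval are placeholders; the actual sync logic against the other farm's web services is up to you.

using System;
using Microsoft.SharePoint.Administration;

// Minimal sketch of a custom SharePoint timer job; all names are illustrative.
public class ListSyncJob : SPJobDefinition
{
    public ListSyncJob() : base() { }

    public ListSyncJob(string jobName, SPWebApplication webApplication)
        : base(jobName, webApplication, null, SPJobLockType.Job) { }

    public override void Execute(Guid targetInstanceId)
    {
        // Call the other farm's web services (e.g. Lists.asmx) here,
        // compare item versions/timestamps, and write changes to the local list.
    }
}

// Registration (e.g. from a feature receiver), scheduled every 15 minutes:
// var job = new ListSyncJob("ListSyncJob", webApplication)
// {
//     Schedule = new SPMinuteSchedule { BeginSecond = 0, EndSecond = 59, Interval = 15 }
// };
// job.Update();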
I'm fairly new to Function Apps. We have about a dozen small programs currently running as Windows scheduled tasks on an Azure VM, and we are looking to migrate these to PaaS. Most of these are small console-type background processes that might make an API call, perform a calculation, and store the result in a database, or maybe read some data from a database and then send out an email. We have a mixture of pwsh, Python, and .NET.
I was wondering how many repos I should have. I assume I would need at least 3 (one for each runtime stack). I also didn't want to create a separate repo per app and end up with 50 git repos eventually. Would it be best just to create some root-level folders named after the apps within each repo, to keep the structure looking separate?
Lastly, should each of these apps be hosted in its own function app (Azure resource), or can I have several of the apps hosted in a single function app? I'm not sure whether splitting everything up into separate function apps would make deployment easier or not. I guess ease of long-term support/maintenance would be the most important aspect to me.
Short version: what is considered best practice for logically grouping your apps and mapping them to git repos and Azure Function App resources?
We have built a project in Enterprise Guide with the aim of creating easily understandable and maintainable code. The project contains a set of process flows which should be run in a specific order. We need to run this project on a Linux server machine, where the SAS Metadata Server is running.
The basic idea is to extract this project into SAS code, which we would then be able to run from the command line in Linux as a batch job.
Question 1:
Is there any other way to schedule a batch job on a Linux-hosted SAS server? I have read about VBS scripting for scheduling/running batch jobs, but for this to be done on a Linux server, an installation of WINE is required, which, on a production machine that already runs a number of other important applications, is almost completely out of the question.
Is there a way to specify a complete project export into SAS code, provided that I give the specific order in which the process flows should run? I have tried an ordered list, which lets you build a list of tasks to run in order (although there is no way to choose a whole process flow as a single task), but unfortunately this ordered list itself cannot later be exported as SAS code.
Our current solution is the following:
We export each individual process flow of the SAS EG project into SAS code, and then create another SAS program with %include lines to run all the extracted programs in the order we want. This is of course a possible solution, but definitely not the most elegant one.
Question 2:
Since I don't know exactly how the code is exported, are there any dangers I should bear in mind with the solution I chose?
Is there any other, more elegant way?
You have a couple of options from what I'm familiar with, plus I suspect if Dom happens by he'll know more. These answers are based on EG 6.1, which is the current version (ships with 9.4); it's possible some of these things may not be true in earlier versions.
First, if you're running Enterprise Guide from Windows, you can schedule the job locally (on any Windows machine with Enterprise Guide). You're not scheduling on the server directly; you schedule Windows to launch an EG process that connects to the server and does its magic. That's largely how I interact with scheduling (because I have a fairly 'light' scheduling need).
Second, from the blog post "Four Ways to Schedule SAS Tasks", options 3 and 4 may be helpful for you. The SAS Platform Suite is designed in part for scheduling, and the option of using SAS Management Console to schedule via operating system tools is also very helpful.
Third, you may want to look into SAS Stored Processes, which should be schedulable. A process flow can be converted into a stored process.
For your specific questions:
Question 1: When you export a process flow or a project, at least in 6.1 you have the option to change the order in which the programs are exported. It's manual, so it's probably not perfect, but it does give you that option. (The code seems to be exported in creation order by default, which is sub-optimal.) The project export does group process flows together, but you don't have the option of reordering whole process flows - you have to move each program around individually, which would be tedious. It also, of course, gives you less flexibility if you need to run some programs multiple times.
Question 2: As Stig Eide points out in comments, make sure your system option LRECL is > 256 (the default) or you run some risk of code being cut off. In 9.2+ this is modifiable; just place LRECL=32767 in your config.sas file.
I am currently changing our database deployment strategy to use FluentMigrator and have been reading up on how to run it. Some people have suggested that it can be run from Application_Start; I like this idea, but other people are saying no without specifying reasons, so my questions are:
Is it a bad idea to run the database migration on application start, and if so, why?
We are planning on moving our sites to Azure cloud services, and if we don't run the migration from Application_Start, how and when should we run it, considering we want to make the deployment as simple as possible?
Wherever it is run, how do we ensure it runs only once, given that we will have a website and multiple worker roles as well? (We could just ensure the migration code is only called from the website, but in the future we may scale to 2 or more instances; will that mean it could run more than once?)
I would appreciate any insight on how others handle the migration of the database during deployment, particularly from the perspective of deployments to Azure cloud services.
EDIT:
Looking at the comment below, I can see the potential problems of running during Application_Start. Perhaps the issue is that I am trying to solve a problem with the wrong tool. FluentMigrator may not be the way to go in our case, as we have a large number of stored procedures, views, etc., so as part of the migration I was going to have to use SQL scripts to keep them at the right version, and I don't think migrating down would be possible.
What I liked about the idea of running during Application_Start was that I could build a single deployment package for Azure, upload it to staging, and the database migration would run and that would be it, rather than running manual scripts; then I could just swap into production.
Running migrations during Application_Start can be a viable approach, especially during development.
However, there are some potential problems:
Application_Start will take longer and FluentMigrator will be run every time the App Pool is recycled. Depending on your IIS configuration this could be several times a day.
If you do this in production, users might be affected, e.g. trying to access a table while it is being changed will result in an error.
DBAs don't usually approve.
What happens if the migrations fail on startup? Is your site down then?
My opinion:
For a site with a decent amount of traffic I would prefer to have a build script and more control over when I change the database schema. For a hobby (or small non-critical project) this approach would be fine.
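For reference, if you do go the Application_Start route, the call itself is small. Here is a minimal sketch using FluentMigrator's current in-process runner API; the connection string name ("AppDb"), the database choice, and the assembly to scan are illustrative assumptions, not part of the question.

using System.Configuration;
using FluentMigrator.Runner;
using Microsoft.Extensions.DependencyInjection;

public static class DatabaseMigrator
{
    // Called from Application_Start (or equally from a deployment/build script).
    public static void MigrateUp()
    {
        // "AppDb" is an illustrative connection string name.
        var connectionString =
            ConfigurationManager.ConnectionStrings["AppDb"].ConnectionString;

        var services = new ServiceCollection()
            .AddFluentMigratorCore()
            .ConfigureRunner(rb => rb
                .AddSqlServer()                                   // pick your database here
                .WithGlobalConnectionString(connectionString)
                .ScanIn(typeof(DatabaseMigrator).Assembly).For.Migrations())
            .AddLogging(lb => lb.AddFluentMigratorConsole())
            .BuildServiceProvider(false);

        using (var scope = services.CreateScope())
        {
            // Remember: wired into Application_Start, this runs on every App Pool recycle.
            scope.ServiceProvider.GetRequiredService<IMigrationRunner>().MigrateUp();
        }
    }
}

Calling the same method from a build or deployment script (or using FluentMigrator's command-line runner) gives you the "more control" option mentioned above without changing the migrations themselves.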
An alternative approach that I've used in the past is to make your migrations non-breaking - that is, you write your migrations in such a way that they can be deployed before any code changes and work with the existing code. This way, code and migrations can be deployed independently 95% of the time. For example, instead of changing an existing stored procedure you create a new one, or if you want to rename a table column you add a new one.
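As an illustration of that idea, here is a sketch of a non-breaking FluentMigrator migration (table, column names and version number are made up): rather than renaming Customers.Surname, a new FullName column is added and backfilled, so the old column and the existing code keep working until the new code ships; a later migration can drop the old column.

using FluentMigrator;

[Migration(201301011200)]  // illustrative version number
public class AddCustomerFullName : Migration
{
    public override void Up()
    {
        // Add a new column instead of renaming an existing one.
        Alter.Table("Customers")
            .AddColumn("FullName").AsString(200).Nullable();

        // Backfill from the existing columns so old and new code both work.
        Execute.Sql("UPDATE Customers SET FullName = FirstName + ' ' + Surname");
    }

    public override void Down()
    {
        Delete.Column("FullName").FromTable("Customers");
    }
}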
The benefits of this are:
Your database changes can be applied before any code changes. You're then free to roll back any breaking code changes or breaking migrations.
Breaking migrations won't take the existing site down.
DBAs can run the migrations independently.
I am developing a website that I intend to run within Windows Azure using a single Web Role. The site will make use of the Sphinx Search engine, which will need to run as a Windows service. So, my question is this: is it possible to install the Sphinx Search Windows service inside of a Web Role?
From my initial research into Azure I am thinking "yes" for the reason that the Web Role is a VM running IIS. Therefore I should be able to remote in, install the service, and it should work. :)
Does this sound right?
Installing software via RDP is not a viable solution with Web/Worker role instances, as these changes won't persist. You need to install it either from a startup script or from OnStart(). Since you want to install as a service, that would imply startup script, since it would need elevated permissions. Note: The installer must support unattended mode, where all parameters are specified via command line with no human interaction.
What about scalability? If you have more than one instance of your web role running, can Sphinx run across two instances? From what I read, it supports ODBC-compliant databases, and you might be able to use it against Windows Azure SQL Database. If that's the case, can two Sphinx engines run on two different machines accessing the same data store? If so, this sounds like a viable solution.
If installation cannot be automated, or you need something additional like MySQL, you may want to consider placing the Sphinx search engine inside a Virtual Machine (new as of June 2012). You can then spin up a Windows Server 2008 instance, RDP into it, and configure it exactly how you want it.
Strictly speaking yes, you could do that. However this makes the assumption that you would be running on one VM instance and also that the instance would never need restarting.
You should consider looking at Azure worker roles for any functionality that would normally exist as a Windows service.
After reading your answers, and thinking about it a bit more, I think dropping the idea of installing a service would be the best course of action. I've been looking at the API for Lucene.NET (this may be the same for Sphinx) and it's possible to encapsulate the writing/managing of indexes, etc., within code, so there is no need for a service.
For Azure, there is a library for managing index files using both local and Azure storage which could be of use. Scenarios I've read about show that it's then possible to have a Web Role that processes HTTP requests and performs the searches, and a Worker Role that accepts DB changes via a queue and writes them to the indexes.
I have already spent a lot of time googling for a solution, but I'm stuck!
I have an MVC application and I'm trying to do "integration testing" of my views using Coypu and SpecFlow, but I don't know how I should manage the IIS server for this. Is there a way to actually run the server (on the first start of the tests) and make it use a special "test" DB (for example an in-memory RavenDB) that is emptied after each scenario (and filled during the Background)?
Is there a better or simpler way to do this?
I'm fairly new to this too, so take the answers with a pinch of salt, but as no one else has answered...
Is there a way to actually run the server (first start of tests) ...
You could use IIS Express, which can be called via the command line. You can spin up your website before any tests run (which I believe you can do with the [BeforeTestRun] attribute in SpecFlow) with a call via System.Diagnostics.Process.
The actual command line would be something like:
iisexpress.exe /path:c:\iisexpress\<your-site-published-to-filepath> /port:<anyport> /clr:v2.0
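Putting those two together, here is a rough sketch of a SpecFlow hook that starts IIS Express once before the test run and kills it afterwards; the install path, site path, and port are illustrative placeholders.

using System.Diagnostics;
using TechTalk.SpecFlow;

[Binding]
public static class TestWebServer
{
    private static Process _iisExpress;

    [BeforeTestRun]
    public static void StartIisExpress()
    {
        // Paths and port are placeholders -- point them at your published site.
        _iisExpress = Process.Start(new ProcessStartInfo
        {
            FileName        = @"C:\Program Files\IIS Express\iisexpress.exe",
            Arguments       = @"/path:c:\iisexpress\MySite /port:5555",
            UseShellExecute = false
        });
    }

    [AfterTestRun]
    public static void StopIisExpress()
    {
        if (_iisExpress != null && !_iisExpress.HasExited)
            _iisExpress.Kill();
    }
}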
... and making the server use a special "test" DB (for example an in-memory RavenDB) emptied after each scenario (and filled during the background).
In order to use a special test DB, I guess it depends on how your data access works. If you can swap in an in-memory DB fairly easily then I guess you could do that. Although my understanding is that integration tests should be as close to the production environment as possible, so if possible use the same DBMS you're using in production.
What I'm doing is just doing a data restore to my test DB from a known backup of the prod DB, each time before the tests run. I can again call this via command-line/Process before my tests run. For my DB it's a fairly small dataset, and I can restore just the tables relevant to my tests, so this overhead isn't too prohibitive for integration tests. (It wouldn't be acceptable for unit tests however, which is where you would probably have mock repositories or in-memory data.)
Since you're already using SpecFlow take a look at SpecRun (http://www.specrun.com/).
It's a test runner which is designed for SpecFlow tests and adds all sorts of capabilities, from small conveniences like better formatting of the Test names in the Test Explorer to support for running the same SpecFlow test against multiple targets and config file transformations.
With SpecRun you define a "Profile" which will be used to run your tests, not dissimilar to the VS .runsettings file. In there you can specify:
<DeploymentTransformation>
  <Steps>
    <IISExpress webAppFolder="..\..\MyProject.Web" port="5555"/>
  </Steps>
</DeploymentTransformation>
SpecRun will then start up an IIS Express instance running that website before running your tests. In the same place you can also set up custom deployment transformations (using the standard App.config transformations) to override the connection strings in your app's Web.config so that they point to the in-memory DB.
The only problem I've had with SpecRun is that the documentation isn't great; there are lots of video demonstrations, but I'd much rather have a few written tutorials. I guess that's what Stack Overflow is here for.