Is there a way on Sybase ASE to get the time since the ASE server was started?
There is a global variable @@boottime which holds the time when the ASE server was started. Simply run
select @@boottime
to get the time when the server was started.
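If you want the elapsed uptime rather than the boot timestamp, a minimal sketch using ASE's datediff and getdate functions (pick whichever date part you need) is:
select datediff(mi, @@boottime, getdate()) as minutes_since_boot
Swap mi for hh or dd if hours or days are more useful.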
If you are asking about ASE server uptime, you can find it using the query
select crdate from master..sysdatabases where name = 'tempdb'
which will return the boot time, as tempdb is recreated at boot.
For context, we have an Azure Virtual Machine that we use to test our app before pushing changes to production. This VM automatically shuts down every evening to save energy. Is there a way to make it start when someone tries to access its DNS name (******.cloudapp.azure.com)? That would be better than starting it at a fixed time every morning, but if it is impossible or too complicated, is there a way to make it start at a specified time?
Starting the VM based on events would require some changes to your architecture.
You can start a VM at a fixed time using an Automation account. This tutorial on Microsoft Learn shows how to accomplish that:
https://learn.microsoft.com/en-us/azure/automation/automation-solution-vm-management-config
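If you just want to see what the runbook behind that solution boils down to, here is a rough sketch only (the Run As connection name is the default one; the resource group and VM name are placeholders):
# Sign in with the Automation account's Run As connection, then start the VM
$conn = Get-AutomationConnection -Name "AzureRunAsConnection"
Add-AzureRmAccount -ServicePrincipal -TenantId $conn.TenantId -ApplicationId $conn.ApplicationId -CertificateThumbprint $conn.CertificateThumbprint
Start-AzureRmVM -ResourceGroupName "my-resource-group" -Name "my-test-vm"
Attach the runbook to a schedule (for example, every weekday at 07:00) and the VM will be up before anyone hits its DNS name.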
If you had a firewall in front of your VM you could log the HTTP 404 events and trigger a function that starts your VM.
But that is not trivial, and the firewall would add extra cost (it may well be more expensive than your VM).
So the best way is probably just to start the VM at a fixed time.
If you need the VM running permanently, you could look into Reserved Instances to save costs; they can save you up to 80%: https://azure.microsoft.com/en-us/pricing/reserved-vm-instances/
We've got a classic VM on Azure. All it does is run SQL Server with a lot of databases (we've got another VM, the web-facing web server, which accesses the classic SQL VM for data).
The problem is that since yesterday morning we have been experiencing outages every 2-3 hours. There doesn't seem to be any reason for it. We've been working with Azure support, but they are still struggling to work out what the issue is, and there doesn't seem to be anything in the event logs that gives us any information.
All that happens is that we receive a Pingdom alert saying the box is down; we then can't remote into it as the connection times out, and all database calls to it fail. Five minutes later it comes back up. It doesn't seem to fully reboot or anything, it just halts.
Any ideas what this could be caused by, any places we could look for better information, or any way to stop it from happening?
The only thing that seems to be in the event logs that occurs around the same time is a DNS Client Event "Name resolution for the name [DNSName] timed out after none of the configured DNS servers responded."
Quickest recovery:
Did you check SQL Server by connecting from inside the VM using localhost or 127.0.0.1\InstanceName? If you can connect to SQL Server internally without any issue, then capture (snapshot) the SQL Server VM and create a new VM from that capture (i.e. without losing any data).
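For the internal check, something along these lines from inside the VM (the instance name is a placeholder; drop it for a default instance) will confirm the engine itself is still answering:
sqlcmd -S localhost\MyInstance -E -Q "SELECT @@SERVERNAME, @@VERSION"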
This issue may be caused by one of the following:
Azure Network Firewall
Windows Server Update
This ended up being a fault with the node/sector that our VM was on. I fixed it by increasing the size of our VM instance (4 cores to 8 cores); this forced Azure to move it to another node/sector, which rectified the issue.
I am working in a business in New Zealand. We currently use a remote server (Plexus) to store a large amount of data (some tables > 2 billion rows). We have started down the SharePoint route, and I have created a number of databases and apps in SharePoint that use this data. Currently, I have to run a program in New Zealand that downloads the data to our local server and then pushes it up into an Azure database, which the web apps connect to. I would like to remove this middle step for many reasons, but the biggest is that the web connection between NZ and the US tends to result in a lot of timeouts and long pulls because large data sets have to come across the Pacific. The remote database we are using is Plexus.
Ideally, I would like to have my C# code sitting in Azure and have this connect to the remote server directly. This way I could simply send the SQL request to Plex and have this data go directly into the Azure databases. The major advantage would be that this would mean it would all be based in the US which would make things a lot faster.
The major hurdle is that we need to install an ODBC driver given to us by the remote server into Azure so it recognises the calls as genuine. Our systems administrator has said he has looked into it, and it seems this can't be done?
I was hoping someone in the Stack Overflow community has encountered a similar issue and resolved it.
Note: please don't think I am asking whether Azure has an ODBC connection, because I know it does. I am not asking if I can connect TO Azure; I am asking if I can connect Azure to another external data source.
In a Worker Role/Cloud Service in Azure you can install the ODBC driver in a startup task using PowerShell's ODBC cmdlets.
More info here: PowerShell Add-OdbcDsn, and here: PowerShell startup tasks in Cloud Services.
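As a rough sketch (the driver name, DSN name and connection properties are placeholders for whatever Plexus actually ships), the cmdlet call in the startup script would look something like:
# Install the vendor driver first (e.g. msiexec /i plexus-odbc.msi /qn), then register a system DSN for it
Add-OdbcDsn -Name "PlexusDsn" -DriverName "Plexus ODBC Driver" -DsnType "System" -Platform "64-bit" -SetPropertyValue @("Server=plexus.example.com", "Database=Production")
Remember that the startup task has to run elevated for a system DSN to be created.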
One option is to create a virtual machine in the same Azure data center as your database and install your ODBC driver and your C# app.
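Either way, once the driver and DSN are on the machine, the C# side is plain System.Data.Odbc; a minimal sketch (DSN name, credentials and query are placeholders) might look like:
using System.Data.Odbc;

class PlexusPull
{
    static void Main()
    {
        // DSN registered by the driver install / Add-OdbcDsn step; credentials are placeholders
        using (var conn = new OdbcConnection("DSN=PlexusDsn;Uid=svc_user;Pwd=secret"))
        {
            conn.Open();
            using (var cmd = new OdbcCommand("SELECT * FROM SomeTable", conn))
            using (var reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                {
                    // push each batch of rows into the Azure database here
                }
            }
        }
    }
}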
I am looking for a solution that monitors a service on a server and runs a custom script when a problem is found.
To be more specific:
We have a service that relies on many Elastic IPs at EC2, when a problem occurs on the primary server, all those EIPs are required to move to a slave server.
I have written the script for the EIP failover, but my company wants to use an open source tool for the monitoring part.
I have looked into the Pacemaker/Heartbeat solution, but it seems too complex for what I want to achieve.
Please help me find a good solution for this problem, thanks in advance!
If your problem is as simple as watching a process and triggering scripts, monit will be your best friend:
http://mmonit.com/monit/
The good thing about monit is that it scales well if you have a lot of servers as it runs and executes everything locally on the machine being monitored.
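A minimal control-file sketch for this case (the pid file, port and script path are placeholders for your setup):
check process myservice with pidfile /var/run/myservice.pid
  start program = "/etc/init.d/myservice start"
  stop program = "/etc/init.d/myservice stop"
  if failed port 80 protocol http then exec "/usr/local/bin/eip-failover.sh"
monit polls on a fixed cycle (configured with "set daemon"), so the failover script fires within one cycle of the check failing.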
Have you considered using Scout? It allows you to write custom scripts that get executed when triggers fire. For example, you can set up a trigger on a third server so that when it can't reach one of your EIPs, it performs the EIP switchover.
We are currently monitoring all of our servers using Scout and are pretty happy.
What are your disaster recovery plans for Windows SharePoint Services 3.0?
Currently we are backing up all databases (one content, plus admin, search, and config) using SQL backup tools, and backing up the front-end server via Data Protector.
To test our backups, we use another server farm, restore the content database (following the procedure on TechNet), and create a new application that uses this database. We just have to redeploy solutions on the newly created SharePoint application.
However, we have to change the database access credentials (on SQL Server): the user accounts used in production aren't the same as those used on our "test" farm.
In the end, we can restore our content database and access all our sites. Search doesn't work, but we're investigating.
Is this restore scenario reliable (as in supported by Microsoft)?
You can't really back up / restore the config database and the search database:
restoring the config database only works if your new farm has exactly the same server names
when you restore the search database, the full-text index is not synchronized; however, this is not a problem as you can just reindex
As a result, I would say that yes, this is reliable for content. But take care of the following:
You may have to redo some configuration (AAM, managed paths...).
This does not cover customizations, so keep a backup of your solutions.
Reliability is in the eye of the beholder. In this case, if your tests of the restore process are successful, then yes, it is reliable.
A number of my clients run SharePoint (both MOSS and WSS) in virtual environments; SQL Server is also virtualised and backed up both with SQL tools and with Volume Shadow Copy.
The advantage of a virtual environment is that downtime is only as long as it takes your virtual server host to boot the images.
If you are not using virtualisation, then remember to back up transaction logs regularly, as this will make it easier to restore to a given point in the day. It also means your transaction logs don't grow too big!
I prefer to use the stsadm -o backup command 'for catastrophic backup' as it says in the help. This can be scheduled but requires some maintenance of the backup metadata XML file when you start running out of disk space and need to archive older backups. It has the advantage of transferring over timer jobs (usually) and other configuration because as Nico says, restoring the config database won't work for most situations.
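For reference, the scheduled command is just the catastrophic form of stsadm backup (the UNC path is a placeholder):
stsadm -o backup -directory \\backupserver\wss-backups -backupmethod full
The matching restore is stsadm -o restore -directory \\backupserver\wss-backups -restoremethod overwrite.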
To restore, you can use the user interface, which is nice, and you don't have to mess around with much else. I think it restores your solutions as well, but I haven't tested that extensively.