I want to develop a logging technique using the CassandraLog4net appender, but I am getting an UnavailableException.
Can you tell me whether I have to create a keyspace or database before running this code?
Also, I am not able to use nodetool: when I click on it, it disappears again.
What changes should I make?
Please find the configuration details of the CassandraLog4netAppender below.
<KeyspaceName value="Logging" />
<ColumnFamily value="LogEntries" />
<PlacementStrategy value="org.apache.cassandra.locator.NetworkTopologyStrategy" />
<StrategyOptions value="Datacentre1:1" />
<ReplicationFactor value="1" />
<ConsistencyLevel value="QUORUM" />
<MaxBufferedRows value="1" />
UnavailableException means there aren't enough replicas available to satisfy your query. From your configuration I see a lot of inconsistency in your cluster config:
Your log4net appender strategy options point to "Datacentre1".
Your topology file lists a bunch of machines in "DC1", "DC2", and "DC3" with multiple racks.
Your keyspace is set up with only one DC, called "DC1".
nodetool shows a single node listening on 127.0.0.1, which doesn't correlate to any of your configured machines.
So you're getting UnavailableException because you're asking for something that doesn't exist. You need a consistent configuration across the various pieces.
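To the first question: unless the appender creates them for you, the keyspace and column family must exist before it can write to them, and the appender's settings have to match what you create. A minimal sketch in CQL, assuming the data center is really named DC1 as in your topology file (in which case the appender's StrategyOptions should read "DC1:1" as well):

CREATE KEYSPACE "Logging"
  WITH REPLICATION = { 'class' : 'NetworkTopologyStrategy', 'DC1' : 1 };

Note that with a replication factor of 1, QUORUM still needs that single replica to be up, so any node outage will surface as an UnavailableException.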
I am trying to setup a collectd configuration that sends metrics to two separate http server endpoints.
My setup has Client (C) with the collectd plugin running on it. All of the metrics it collects need to be sent to Server (A), and a subset of the metrics also needs to be sent to Server (B).
Setup:
I am using the following configuration for the write_http plugin:
<Plugin write_http>
<Node "serverA">
URL "http://servera/collectd-post"
Format "JSON"
StoreRates false
</Node>
<Node "serverB">
URL "http://serverb/collectd-post"
Format "JSON"
StoreRates false
</Node>
</Plugin>
Further, to selectively send the metrics, I tried using the following flow control configuration:
PostCacheChain "bchain"
<Chain "bchain">
<Rule "brule">
<Match "regex">
Plugin "(load|interface|disk|cpu|memory|df|swap|processes)"
</Match>
<Target "write">
Plugin "write_http/serverB"
</Target>
<Target "return">
</Target>
</Rule>
<Target "return">
</Target>
</Chain>
Per my understanding of the flow control rules, the above configuration should send metrics from the listed plugins (load, interface, disk, cpu, memory, df, swap, and processes) to the serverB node of the write_http plugin (via the "brule" rule). These matched metrics should also remain available for sending to the other node of the write_http plugin (because of the Target "return" inside the "brule" rule).
All other metrics should be processed and sent by the other node of the write_http plugin (because of the Target "return" outside the "brule" rule).
The problem is that I cannot get this to work the way I have described.
What's working:
ALL metrics are duplicated to BOTH server A and server B if I remove the PostCacheChain configuration.
Selected metrics are sent ONLY to server B if the PostCacheChain configuration is kept.
If I use another write plugin, ALL metrics are sent to that plugin and only SELECTED metrics are sent to server B when the PostCacheChain configuration is used.
What's not working:
When using the PostCacheChain as listed, NO metrics are sent to server A.
Any solutions or suggestions to get the split destination working would be greatly appreciated.
PS:
The documentation for the write_http plugin and for collectd flow control both seem to indicate that my approach is correct. However, it looks to me like a write_http plugin is processed only once (even when it has multiple nodes), and that once a write plugin is referenced in a rule, it will not be processed outside of that rule.
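One workaround consistent with that observation (an untested sketch; it assumes your write_http block stays exactly as above, and the regex is anchored here so that a plugin like cpufreq is not matched accidentally) is to stop relying on the implicit default write and to name both nodes explicitly: write matched metrics to serverB inside the rule, then let every metric fall through to an explicit write to serverA at the end of the chain:

PostCacheChain "bchain"
<Chain "bchain">
  <Rule "brule">
    <Match "regex">
      Plugin "^(load|interface|disk|cpu|memory|df|swap|processes)$"
    </Match>
    <Target "write">
      Plugin "write_http/serverB"
    </Target>
  </Rule>
  # Default target: reached by every metric, matched or not.
  <Target "write">
    Plugin "write_http/serverA"
  </Target>
</Chain>

Because the "write" target does not stop processing, metrics matched by "brule" are written to serverB and then continue to the chain's default target, which writes them to serverA as well.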
GridGain has a failover SPI mechanism for handling failure of jobs on nodes.
However, we would like to configure a failure mechanism that throws an exception even when one of the configured data nodes goes down.
How can we do this?
Are you trying to prevent failover for your tasks and throw an exception if a node that was in the process of executing a job fails? (I'm not sure I understood you correctly, so please correct me if I'm wrong.)
If I'm right, the easiest way is to configure NeverFailoverSpi, like this:
<bean id="ignite.cfg" class="org.apache.ignite.configuration.IgniteConfiguration">
...
<property name="failoverSpi">
<bean class="org.apache.ignite.spi.failover.never.NeverFailoverSpi"/>
</property>
</bean>
Another option is to use the IgniteCompute.withNoFailover() method. It's useful if you want to disable failover for a small subset of tasks, but still use the default mechanisms for the others. Here is an example:
IgniteCompute compute = ignite.compute().withNoFailover();
// Tasks executed with this compute instance will never failover.
compute.execute(MyTask1.class, "arg");
How can I solve this error once and for all?
Server Error in '/****StatWCF_OData' Application.
Memory gates checking failed because the free memory (373817344 bytes) is less than 5% of total memory. As a result, the service will not be available for incoming requests. To resolve this, either reduce the load on the machine or adjust the value of minFreeMemoryPercentageToActivateService on the serviceHostingEnvironment config element.
Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code.
Exception Details: System.InsufficientMemoryException: Memory gates checking failed because the free memory (373817344 bytes) is less than 5% of total memory. As a result, the service will not be available for incoming requests. To resolve this, either reduce the load on the machine or adjust the value of minFreeMemoryPercentageToActivateService on the serviceHostingEnvironment config element.
Source Error:
An unhandled exception was generated during the execution of the current web request. Information regarding the origin and location of the exception can be identified using the exception stack trace below.
Stack Trace:
[InsufficientMemoryException: Memory gates checking failed because the free memory (373817344 bytes) is less than 5% of total memory. As a result, the service will not be available for incoming requests. To resolve this, either reduce the load on the machine or adjust the value of minFreeMemoryPercentageToActivateService on the serviceHostingEnvironment config element.]
System.ServiceModel.Activation.ServiceMemoryGates.Check(Int32 minFreeMemoryPercentage, Boolean throwOnLowMemory, UInt64& availableMemoryBytes) +121924
System.ServiceModel.HostingManager.CheckMemoryCloseIdleServices(EventTraceActivity eventTraceActivity) +86
System.ServiceModel.HostingManager.EnsureServiceAvailable(String normalizedVirtualPath, EventTraceActivity eventTraceActivity) +883
[ServiceActivationException: The service '/****StatWCF_OData/OData.svc' cannot be activated due to an exception during compilation. The exception message is: Memory gates checking failed because the free memory (373817344 bytes) is less than 5% of total memory. As a result, the service will not be available for incoming requests. To resolve this, either reduce the load on the machine or adjust the value of minFreeMemoryPercentageToActivateService on the serviceHostingEnvironment config element..]
System.Runtime.AsyncResult.End(IAsyncResult result) +650220
System.ServiceModel.Activation.HostedHttpRequestAsyncResult.End(IAsyncResult result) +210733
System.Web.CallHandlerExecutionStep.OnAsyncHandlerCompletion(IAsyncResult ar) +282
Version Information: Microsoft .NET Framework Version:4.0.30319; ASP.NET Version:4.0.30319.17929
The solution is written in your post:
To resolve this, either reduce the load on the machine or adjust the
value of minFreeMemoryPercentageToActivateService on the
serviceHostingEnvironment config element.
The easiest way is to just add this to your web.config:
<system.serviceModel>
<serviceHostingEnvironment minFreeMemoryPercentageToActivateService="0" />
</system.serviceModel>
Read more about serviceHostingEnvironment here.
Anyway, as @Mr Grok correctly pointed out, it is an indication that your machine doesn't have enough physical memory, and you should figure out why that is happening. It can be a serious problem.
I had this problem. It turned out SQL Server was using over 29 GB of my available 32 GB.
Check your SQL Server if you have one. MS SQL Server is designed to take up as much free memory as you allow it. You can limit this in the 'Maximum server memory' box in the memory properties of the SQL Server instance.
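If you prefer to script that limit instead of using the property tabs, the same cap can be set with sp_configure (a sketch; 4096 MB is just an example value, pick one that suits your machine):

-- Cap SQL Server's memory usage at 4 GB (example value)
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 4096;
RECONFIGURE;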
I set the minFreeMemoryPercentageToActivateService attribute on the serviceHostingEnvironment element to 0 in web.config.
<system.serviceModel>
<serviceHostingEnvironment minFreeMemoryPercentageToActivateService="0" />
</system.serviceModel>
This is the simplest way I found of doing this.
Well, I had the same issue and was looking for a solution. Most blogs suggested the same fix of adding the 'serviceHostingEnvironment' attribute to the config, which is a risky job, as adding the attribute affects the entire IIS host and its hosted solutions, and will ultimately restart IIS.
As the error message was related to a memory shortage and we were asked to reduce the load on the server, what I did was simply restart the SQL Server (MSSQLSERVER) service through the service manager, and everything went back to normal; I got rid of the memory issue.
Windows + R > services.msc > restart the SQL Server (MSSQLSERVER) service.
The web.config change worked for me, but the SQL Server memory usage was also an issue that needed to be addressed.
I was able to resolve the SQL Server memory issue without restarting the MSSQL processes, simply by reducing the 'Maximum server memory' value in the server's memory properties. The default is effectively unlimited.
Without a restart of the MS SQL service, the process automatically started reducing its memory footprint to the configured value.
I tried setting minFreeMemoryPercentageToActivateService="0" in the web.config files, but this still didn't fix it in my case.
I'm using VMware, and it was performing super slowly because I had too many snapshots! I deleted the old ones, cleaned up some disk space, and restarted the server, and doing so resolved all the errors.
I'm new to VMware and SharePoint and thought I would share my experience here!
Add this to your machine.config under the system.serviceModel section, which will set it for all sites:
<serviceHostingEnvironment minFreeMemoryPercentageToActivateService="0" />
You need Administrator privileges to edit this file.
See this answer on how to find machine.config.
You can check whether any processes are using excess memory before adding this setting. For me, however, the default setting did more harm than good: a lot of "Standby" memory was in use that the system could eject whenever it was needed, so the system was not really out of memory.
You need to restart IIS using the IISReset command-line utility:
From the Start menu, click Run.
In the Open box, type cmd, and click OK.
At the command prompt, type: iisreset /noforce
IIS attempts to stop all services before restarting. The IISReset command-line utility waits up to one minute for all services to stop.
I'm relatively new to Windows Azure and I need to get a better appreciation of how the Azure platform handles connection string configuration settings.
Assume I have an ASP.Net web project and this has a Web.Config connection string setting like the following:
<add name="MyDb" connectionString="Data Source=NzSqlServer01;Initial Catalog=MyAzureDb;User ID=joe;Password=bloggs;"
providerName="System.Data.SqlClient" />
I use this connection string for local testing and such. Let's assume I have a ServiceConfiguration.Local.cscfg file that holds that same connection information.
Now I'm ready to deploy out to my Azure instance. My ServiceConfiguration.Cloud.cscfg file looks like this:
<Setting name="MyDb"
value="Data Source=tcp:e54wn1clij.database.windows.net;Database=MyAzureDb{0};User ID=joe.bloggs#e54wn1clij;Password=reallysecure;Trusted_Connection=False;Encrypt=True;" />
What I'm trying to get my head around is this: if I have code in my web application that looks for a connection string called "MyDb" (for example, by calling ConfigurationManager.ConnectionStrings["MyDb"].ConnectionString), does Azure automagically know to look for a database called MyAzureDb1 or MyAzureDb2 based on the ServiceConfiguration file's connection string, or will the web application's code simply look for whatever's in Web.config and fail to correctly load-balance the database connections?
You'd need to call RoleEnvironment.GetConfigurationSettingValue(...) to read the one in ServiceConfiguration.Cloud.cscfg.
The advantage to using .cscfg to store settings is that you can change them at runtime without having to deploy new code. Web.config is just another file that's part of your app, so you have to deploy a new package to update it, but the settings in .cscfg can be modified in the portal or by uploading a new .cscfg file without deploying and disturbing the app itself.
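A minimal sketch of that in code (assuming the .cscfg setting is named MyDb as above; the RoleEnvironment.IsAvailable guard lets the same code fall back to Web.config when running outside Azure):

using System.Configuration;
using Microsoft.WindowsAzure.ServiceRuntime;

// Prefer the .cscfg value under Azure; fall back to Web.config locally.
string connectionString = RoleEnvironment.IsAvailable
    ? RoleEnvironment.GetConfigurationSettingValue("MyDb")
    : ConfigurationManager.ConnectionStrings["MyDb"].ConnectionString;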
Intrinsically, unless you code otherwise, all Azure instances are created equal. In your case, this means that the configuration for two or more instances of the same Web Role will be the same.
So, if you've sharded your database and want different instances to read different databases, you'll need to get clever in your startup code and create something that allocates a shard to each instance. You have access to System.Environment.MachineName, which can distinguish between instances in code once they've started.
There are a few ways to do this. One might be to keep a central registry in (say) table storage that records the last-seen-time of the instance owning each shard. A background process on each instance periodically writes to this log. Then, on instance start, check the last-seen-time for each shard; if any is "stale" (significantly older than the current time minus the write interval), the instance knows it can claim that shard for itself, as the old instance has died. A sketch of this claim logic follows.
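In rough C# (ReadLastSeen and WriteLastSeen are hypothetical helpers standing in for the table-storage read and write; the shard names and intervals are made up):

// Claim the first shard whose heartbeat has gone stale.
TimeSpan writeInterval = TimeSpan.FromSeconds(30);
TimeSpan staleAfter = TimeSpan.FromSeconds(90);         // ~3x the write interval

string myShard = null;
foreach (string shard in new[] { "MyAzureDb1", "MyAzureDb2" })
{
    DateTimeOffset lastSeen = ReadLastSeen(shard);      // hypothetical table-storage read
    if (DateTimeOffset.UtcNow - lastSeen > staleAfter)  // previous owner looks dead
    {
        WriteLastSeen(shard, DateTimeOffset.UtcNow);    // hypothetical claim / heartbeat write
        myShard = shard;
        break;
    }
}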
(There are better ways to shard, however, generally around the data your system uses -- e.g. by the largest table in your system.)
I have a log4net ado appender writing to a SQL Server database. I like it, I think it's neat. Before I send it into production, I want to know what the behaviour will be if the database goes down.
I don't want the application to stop because the logging database is unavailable. I am presuming that log4net will just silently fail and not do anything, at least that's what I'm hoping. Can anyone confirm this or (better) point me to some documentation that will confirm this?
The appender (like all log4net appenders that I am aware of) will fail silently and thus stop logging. You can configure the appender to try to reconnect though:
<reconnectonerror value="True" />
In that case you probably want to specify a connection timeout in your db connection string:
Connect Timeout=1
If you do not, your application will be very slow while the db is offline.
If you are on .NET 4.5.1 or later, you will also have to set ConnectRetryCount=0; in your connection string.
Appender Configuration:
<ReconnectOnError value="true" />
<connectionString value="...Connect Timeout=1;ConnectRetryCount=0;" />
If you do async logging, e.g. with Log4Net.Async, then Connect Timeout may be left at its default of 15 seconds.
Details
.NET 4.5.1 adds ADO.NET connection resiliency, which tells log4net the connection is still open even though it isn't, and attempts to reconnect using ConnectRetryCount. Since we want to let log4net do the reconnecting instead (and ConnectRetryCount is capped at 255 retries anyway), we should set ConnectRetryCount to 0. Sources: https://issues.apache.org/jira/browse/LOG4NET-442 and https://blogs.msdn.microsoft.com/dotnet/2013/06/26/announcing-the-net-framework-4-5-1-preview/
Using reconnectonerror will certainly let you start logging again once the DB server is available, but the intermediate log messages written while the server was down will be lost. I don't think log4net has any provision for persisting messages until the DB server becomes available.
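If losing those messages is a concern, one common mitigation (a sketch only, not a built-in persistence mechanism; the appender name and pattern here are illustrative) is to reference a local rolling file appender alongside the ADO appender, so a copy of every message survives a database outage:

<root>
  <level value="INFO" />
  <appender-ref ref="AdoNetAppender" />
  <appender-ref ref="LocalFallbackAppender" />
</root>
<appender name="LocalFallbackAppender" type="log4net.Appender.RollingFileAppender">
  <!-- Keeps a local copy of every message, including those written while the DB is down -->
  <file value="logs\fallback.log" />
  <appendToFile value="true" />
  <maximumFileSize value="10MB" />
  <maxSizeRollBackups value="5" />
  <layout type="log4net.Layout.PatternLayout">
    <conversionPattern value="%date %-5level %logger - %message%newline" />
  </layout>
</appender>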