Azure self-hosted agent offline

It seems my Azure self-hosted Linux agent has crashed:
/etc/systemd/system/vsts.agent.***.Dev\x20Tools.deploytools\x2d***.service
● vsts.agent.***.Dev\x20Tools.deploytools\x2d***.service - Azure Pipelines Agent (***.Dev Tools.deploytools-***)
Loaded: loaded (/etc/systemd/system/vsts.agent.***.Dev\x20Tools.deploytools\x2d***.service; enabled; vendor preset: enabled)
Active: active (running) since Tue 2021-07-27 14:20:41 UTC; 5min ago
Main PID: 26601 (runsvc.sh)
Tasks: 8 (limit: 2322)
Memory: 11.0M
CGroup: /system.slice/vsts.agent.***.Dev\x20Tools.deploytools\x2d***.service
├─26601 /bin/bash /home/peter/runsvc.sh
└─26604 ./externals/node10/bin/node ./bin/AgentService.js
Jul 27 14:26:13 deploytools-*** runsvc.sh[26601]: at Interop.ThrowExceptionForIoErrno(ErrorInfo errorInfo, String path, Boolean isDirectory, Func`2 errorRewriter)
Jul 27 14:26:13 deploytools-*** runsvc.sh[26601]: at Microsoft.Win32.SafeHandles.SafeFileHandle.Open(String path, OpenFlags flags, Int32 mode)
Jul 27 14:26:13 deploytools-*** runsvc.sh[26601]: at System.IO.FileStream.OpenHandle(FileMode mode, FileShare share, FileOptions options)
Jul 27 14:26:13 deploytools-*** runsvc.sh[26601]: at System.IO.FileStream..ctor(String path, FileMode mode, FileAccess access, FileShare share, Int32 bufferSize, FileOptions options)
Jul 27 14:26:13 deploytools-*** runsvc.sh[26601]: at Microsoft.VisualStudio.Services.Agent.HostTraceListener.CreatePageLogWriter()
Jul 27 14:26:13 deploytools-*** runsvc.sh[26601]: at Microsoft.VisualStudio.Services.Agent.HostTraceListener..ctor(String logFileDirectory, String logFilePrefix, Int32 pageSizeLimit, Int32 retentionDays)
Jul 27 14:26:13 deploytools-*** runsvc.sh[26601]: at Microsoft.VisualStudio.Services.Agent.HostContext..ctor(String hostType, String logFile)
Jul 27 14:26:13 deploytools-*** runsvc.sh[26601]: at Microsoft.VisualStudio.Services.Agent.Listener.Program.Main(String[] args)
Jul 27 14:26:13 deploytools-*** runsvc.sh[26601]: Agent listener exited with error code null
Jul 27 14:26:13 deploytools-*** runsvc.sh[26601]: Agent listener exit with undefined return code, re-launch agent in 5 seconds.
But I can't find a proper error. Can anyone see what is going on?
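The stack trace above does point at the error: the listener is dying inside HostTraceListener.CreatePageLogWriter, i.e. while opening its own diagnostic log file, which usually means a full disk or a permissions problem on the agent's _diag directory rather than anything pipeline-related. A few checks worth running (a sketch, not a confirmed diagnosis; /home/peter is taken from the service output above, and _diag is the agent's default diagnostics folder):

df -h /home/peter                        # is the disk full?
ls -ld /home/peter/_diag                 # does the log directory exist, and is it writable by the service user?
sudo journalctl -u 'vsts.agent.*' -e     # full service log, in case the exception message itself scrolled past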

Related

Dynamically receive JSON data in Rust with reqwest

I've been trying to receive JSON data with reqwest and serde, but I keep getting the error:
Error: reqwest::Error { kind: Decode, source: Error("expected value", line: 1, column: 1) }
This is my code so far:
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let url: String = String::from("https://api.slothpixel.me/api/players/leastrio");
    let echo_json: serde_json::Value = reqwest::Client::new()
        .get(url)
        .send()
        .await?
        .json()
        .await?;
    println!("{:#?}", echo_json);
    Ok(())
}
Cargo.toml:
reqwest = { version = "0.11", features = ["json"] }
tokio = { version = "1", features = ["full"] }
serde_json = "1"
So I've tried a few things, and it seems you need to add a User-Agent header for it to work. No idea why the documentation doesn't mention it; I guess reqwest doesn't provide one by default.
reqwest::Client::new()
    .get(url)
    .header("User-Agent", "Reqwest Rust Test")
    .send()
    .await?
    .json()
    .await?;
I used this and it worked!
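For reference, here is the complete program with the header added (same crates as in the Cargo.toml above; the User-Agent value is arbitrary):

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let url = String::from("https://api.slothpixel.me/api/players/leastrio");
    // Some endpoints (often behind CDNs) reject requests that carry no User-Agent header.
    let echo_json: serde_json::Value = reqwest::Client::new()
        .get(&url)
        .header("User-Agent", "Reqwest Rust Test")
        .send()
        .await?
        .json()
        .await?;
    println!("{:#?}", echo_json);
    Ok(())
}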
It should work. As a quick debugging test, add these dependencies:
tracing = "0.1"
tracing-subscriber = "0.2"
Adding to main:
let subscriber = tracing_subscriber::FmtSubscriber::builder()
    .with_max_level(tracing::Level::TRACE)
    .finish();
tracing::subscriber::set_global_default(subscriber)
    .expect("setting default subscriber failed");

dbg!(reqwest::Client::new().get(&url).send().await?.text().await);
Then run:
RUST_LOG=trace cargo run
which gives output like:
Jul 13 09:45:59.232 TRACE hyper::client::pool: checkout waiting for idle connection: ("https", api.slothpixel.me)
Jul 13 09:45:59.234 TRACE hyper::client::connect::http: Http::connect; scheme=Some("https"), host=Some("api.slothpixel.me"), port=None
Jul 13 09:45:59.234 DEBUG hyper::client::connect::dns: resolving host="api.slothpixel.me"
Jul 13 09:45:59.277 DEBUG hyper::client::connect::http: connecting to [2606:4700:3036::6815:5b3]:443
Jul 13 09:45:59.301 DEBUG hyper::client::connect::http: connected to [2606:4700:3036::6815:5b3]:443
Jul 13 09:45:59.352 TRACE hyper::client::conn: client handshake Http1
Jul 13 09:45:59.353 TRACE hyper::client::client: handshake complete, spawning background dispatcher task
Jul 13 09:45:59.353 TRACE hyper::proto::h1::conn: flushed({role=client}): State { reading: Init, writing: Init, keep_alive: Busy }
Jul 13 09:45:59.353 TRACE hyper::client::pool: checkout dropped for ("https", api.slothpixel.me)
Jul 13 09:45:59.354 TRACE encode_headers: hyper::proto::h1::role: Client::encode method=GET, body=None
Jul 13 09:45:59.355 DEBUG hyper::proto::h1::io: flushed 76 bytes
Jul 13 09:45:59.355 TRACE hyper::proto::h1::conn: flushed({role=client}): State { reading: Init, writing: KeepAlive, keep_alive: Busy }
Jul 13 09:45:59.376 TRACE hyper::proto::h1::conn: Conn::read_head
Jul 13 09:45:59.377 TRACE parse_headers: hyper::proto::h1::role: Response.parse([Header; 100], [u8; 953])
Jul 13 09:45:59.377 TRACE parse_headers: hyper::proto::h1::role: Response.parse Complete(937)
Jul 13 09:45:59.378 DEBUG hyper::proto::h1::io: parsed 14 headers
Jul 13 09:45:59.378 DEBUG hyper::proto::h1::conn: incoming body is content-length (16 bytes)
Jul 13 09:45:59.378 TRACE hyper::proto::h1::decode: decode; state=Length(16)
Jul 13 09:45:59.379 DEBUG hyper::proto::h1::conn: incoming body completed
Jul 13 09:45:59.379 TRACE hyper::proto::h1::conn: maybe_notify; read_from_io blocked
Jul 13 09:45:59.379 TRACE hyper::proto::h1::conn: flushed({role=client}): State { reading: Init, writing: Init, keep_alive: Idle }
Jul 13 09:45:59.379 TRACE hyper::proto::h1::conn: flushed({role=client}): State { reading: Init, writing: Init, keep_alive: Idle }
Jul 13 09:45:59.380 TRACE hyper::client::pool: put; add idle connection for ("https", api.slothpixel.me)
Jul 13 09:45:59.380 DEBUG hyper::client::pool: pooling idle connection for ("https", api.slothpixel.me)
Jul 13 09:45:59.380 TRACE hyper::proto::h1::conn: flushed({role=client}): State { reading: Init, writing: Init, keep_alive: Idle }
[src\main.rs:12] reqwest::Client::new().get(&url).send().await?.text().await = Ok(
    "error code: 1020",
)
Jul 13 09:45:59.381 TRACE hyper::proto::h1::dispatch: client tx closed
Jul 13 09:45:59.381 TRACE hyper::client::pool: pool closed, canceling idle interval
Jul 13 09:45:59.382 TRACE hyper::client::pool: checkout waiting for idle connection: ("https", api.slothpixel.me)
Jul 13 09:45:59.382 TRACE hyper::proto::h1::conn: State::close_read()
Jul 13 09:45:59.382 TRACE hyper::client::connect::http: Http::connect; scheme=Some("https"), host=Some("api.slothpixel.me"), port=None
Jul 13 09:45:59.382 TRACE hyper::proto::h1::conn: State::close_write()
Jul 13 09:45:59.382 DEBUG hyper::client::connect::dns: resolving host="api.slothpixel.me"
Jul 13 09:45:59.383 TRACE hyper::proto::h1::conn: flushed({role=client}): State { reading: Closed, writing: Closed, keep_alive: Disabled }
Jul 13 09:45:59.383 DEBUG hyper::client::connect::http: connecting to [2606:4700:3036::6815:5b3]:443
Jul 13 09:45:59.383 TRACE hyper::proto::h1::conn: shut down IO complete
Jul 13 09:45:59.396 DEBUG hyper::client::connect::http: connected to [2606:4700:3036::6815:5b3]:443
Jul 13 09:45:59.428 TRACE hyper::client::conn: client handshake Http1
Jul 13 09:45:59.428 TRACE hyper::client::client: handshake complete, spawning background dispatcher task
Jul 13 09:45:59.429 TRACE hyper::proto::h1::conn: flushed({role=client}): State { reading: Init, writing: Init, keep_alive: Busy }
Jul 13 09:45:59.429 TRACE hyper::client::pool: checkout dropped for ("https", api.slothpixel.me)
Jul 13 09:45:59.430 TRACE encode_headers: hyper::proto::h1::role: Client::encode method=GET, body=None
Jul 13 09:45:59.430 DEBUG hyper::proto::h1::io: flushed 76 bytes
Jul 13 09:45:59.430 TRACE hyper::proto::h1::conn: flushed({role=client}): State { reading: Init, writing: KeepAlive, keep_alive: Busy }
Jul 13 09:45:59.451 TRACE hyper::proto::h1::conn: Conn::read_head
Jul 13 09:45:59.451 TRACE parse_headers: hyper::proto::h1::role: Response.parse([Header; 100], [u8; 953])
Jul 13 09:45:59.452 TRACE parse_headers: hyper::proto::h1::role: Response.parse Complete(937)
Jul 13 09:45:59.452 DEBUG hyper::proto::h1::io: parsed 14 headers
Jul 13 09:45:59.452 DEBUG hyper::proto::h1::conn: incoming body is content-length (16 bytes)
Jul 13 09:45:59.453 TRACE hyper::proto::h1::decode: decode; state=Length(16)
Jul 13 09:45:59.453 DEBUG hyper::proto::h1::conn: incoming body completed
Jul 13 09:45:59.453 TRACE hyper::proto::h1::conn: maybe_notify; read_from_io blocked
Jul 13 09:45:59.453 TRACE hyper::proto::h1::conn: flushed({role=client}): State { reading: Init, writing: Init, keep_alive: Idle }
Jul 13 09:45:59.454 TRACE hyper::proto::h1::conn: flushed({role=client}): State { reading: Init, writing: Init, keep_alive: Idle }
Jul 13 09:45:59.454 TRACE hyper::client::pool: put; add idle connection for ("https", api.slothpixel.me)
Jul 13 09:45:59.454 DEBUG hyper::client::pool: pooling idle connection for ("https", api.slothpixel.me)
Jul 13 09:45:59.454 TRACE hyper::proto::h1::conn: flushed({role=client}): State { reading: Init, writing: Init, keep_alive: Idle }
Jul 13 09:45:59.454 TRACE hyper::client::pool: pool closed, canceling idle interval
Jul 13 09:45:59.454 TRACE hyper::proto::h1::dispatch: client tx closed
Jul 13 09:45:59.455 TRACE hyper::proto::h1::conn: State::close_read()
Jul 13 09:45:59.455 TRACE hyper::proto::h1::conn: State::close_write()
Jul 13 09:45:59.455 TRACE hyper::proto::h1::conn: flushed({role=client}): State { reading: Closed, writing: Closed, keep_alive: Disabled }
Jul 13 09:45:59.456 TRACE hyper::proto::h1::conn: shut down IO complete
Error: reqwest::Error { kind: Decode, source: Error("expected value", line: 1, column: 1) }
"error code: 1020" is not even a JSON object, contrary to https://api.slothpixel.me/api/players/ which returns a JSON "error" object. (1020 looks like a Cloudflare access-denied code, which would fit the missing-User-Agent theory.) I suggest reporting this to https://github.com/slothpixel/core or wherever is appropriate, because this error is odd.

Token is getting invalidated using the Node.js speakeasy library

My purpose is to expire a token after 1 hr (3600 secs). While trying this with the Node.js speakeasy library, the token is getting invalidated well before that. The logs below are for 1, 10, and 60 minutes, and even the 1-minute token gets invalidated before the minute is up. Most of the time I get inconsistent results.
Partial code snippet
let secret = speakeasy.generateSecret({
    length: 10
});
let seconds = 3600; // 1 hr
let token = speakeasy.totp({
    secret: secret.base32,
    step: seconds
});
let otp = {
    "secret": secret.base32.toString(),
    "token": token
};

function checkOTP(otp) {
    let verified = speakeasy.totp.verify({
        secret: otp.secret,
        token: otp.token,
        step: seconds
    });
    return verified;
}
Am I doing something wrong? A few console logs from a sample script:
For 1 minute - invalidated 18 secs early
[ Fri Dec 08 2017 09:16:18 GMT-0800 (Pacific Standard Time) ](true) 9:16:59 AM
[ Fri Dec 08 2017 09:16:18 GMT-0800 (Pacific Standard Time) ](false) 9:17:00 AM
For 10 mins - invalidated more than 7 minutes early
[ Fri Dec 08 2017 09:18:28 GMT-0800 (Pacific Standard Time) ](true) 9:19:59 AM
[ Fri Dec 08 2017 09:18:28 GMT-0800 (Pacific Standard Time) ](true) 9:19:59 AM
[ Fri Dec 08 2017 09:18:28 GMT-0800 (Pacific Standard Time) ](true) 9:19:59 AM
[ Fri Dec 08 2017 09:18:28 GMT-0800 (Pacific Standard Time) ](true) 9:19:59 AM
[ Fri Dec 08 2017 09:18:28 GMT-0800 (Pacific Standard Time) ](false) 9:20:00 AM
For 1 hr - invalidated about 7 minutes early
[ Fri Dec 08 2017 11:07:01 GMT-0800 (Pacific Standard Time) ](true) 11:56:41 AM
[ Fri Dec 08 2017 11:07:01 GMT-0800 (Pacific Standard Time) ](true) 11:56:43 AM
[ Fri Dec 08 2017 11:07:01 GMT-0800 (Pacific Standard Time) ](false) 12:00:37 PM
What is the appropriate way to validate within the above window?
From the readme of speakeasy it looks like your token parameters are wrong:
var token = speakeasy.totp({
    secret: secret.base32,
    encoding: 'base32',
    time: 1453667708 // You have this as 'step' not 'time'
});
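It may also be worth noting (my reading of RFC 6238, not something confirmed in this thread) that TOTP steps are aligned to the Unix epoch: a token generated at 09:16:18 with a 60-second step is only valid until the step boundary at 09:17:00, which matches the early invalidation in the logs above exactly. If a token minted late in a step should still verify, speakeasy lets you accept a window of adjacent steps:

let verified = speakeasy.totp.verify({
    secret: otp.secret,
    token: otp.token,
    step: seconds,
    window: 1 // also accept one step before/after, so late-minted tokens survive the boundary
});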

Exception in Spring Integration - Multiple channels defined on declaring a channel bean

I'm a newbie to Spring Integration and I'm using the following code:
package services.api;

public interface GreetingService {
    public void greetUsers(String userName);
}

package services.impl;

import services.api.GreetingService;

public class GreetServiceImpl implements GreetingService {
    @Override
    public void greetUsers(String userName) {
        if (userName != null && userName.trim().length() > 0) {
            System.out.println("Hello " + userName);
        }
    }
}
package main;

import org.springframework.context.ApplicationContext;
import org.springframework.context.support.ClassPathXmlApplicationContext;
import org.springframework.integration.Message;
import org.springframework.integration.MessageChannel;
import org.springframework.integration.support.MessageBuilder;

public class Main {
    public static void main(String[] args) {
        ApplicationContext applicationContext = new ClassPathXmlApplicationContext("applicationContext.xml");
        MessageChannel messageChannel = applicationContext.getBean(MessageChannel.class);
        Message<String> message = MessageBuilder.withPayload("World").build();
        messageChannel.send(message);
    }
}
<?xml version="1.0" encoding="UTF-8"?>
<beans:beans xmlns:beans="http://www.springframework.org/schema/beans"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xmlns="http://www.springframework.org/schema/integration"
    xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd
        http://www.springframework.org/schema/integration http://www.springframework.org/schema/integration/spring-integration.xsd">

    <channel id="pushChannel" />

    <service-activator input-channel="pushChannel" ref="service"
        method="greetUsers" />

    <beans:bean id="service" class="services.impl.GreetServiceImpl" />

</beans:beans>
I'm getting the following error, even though I've declared only one message channel:
Mar 04, 2014 4:46:23 PM org.springframework.context.support.ClassPathXmlApplicationContext prepareRefresh
INFO: Refreshing org.springframework.context.support.ClassPathXmlApplicationContext#34d0cdd0: startup date [Tue Mar 04 16:46:23 IST 2014]; root of context hierarchy
Mar 04, 2014 4:46:23 PM org.springframework.beans.factory.xml.XmlBeanDefinitionReader loadBeanDefinitions
INFO: Loading XML bean definitions from class path resource [applicationContext.xml]
Mar 04, 2014 4:46:23 PM org.springframework.beans.factory.config.PropertiesFactoryBean loadProperties
INFO: Loading properties file from URL [jar:file:/D:/Personal%20Data/Softwares/spring-framework-4.0.0.RELEASE-dist/SpringIntegration/spring-integration-3.0.0.RELEASE-dist/spring-integration-3.0.0.RELEASE/libs/spring-integration-core-3.0.0.RELEASE.jar!/META-INF/spring.integration.default.properties]
Mar 04, 2014 4:46:23 PM org.springframework.integration.config.xml.IntegrationNamespaceHandler registerHeaderChannelRegistry
INFO: No bean named 'integrationHeaderChannelRegistry' has been explicitly defined. Therefore, a default DefaultHeaderChannelRegistry will be created.
Mar 04, 2014 4:46:23 PM org.springframework.integration.config.xml.DefaultConfiguringBeanFactoryPostProcessor registerErrorChannel
INFO: No bean named 'errorChannel' has been explicitly defined. Therefore, a default PublishSubscribeChannel will be created.
Mar 04, 2014 4:46:23 PM org.springframework.integration.config.xml.DefaultConfiguringBeanFactoryPostProcessor registerTaskScheduler
INFO: No bean named 'taskScheduler' has been explicitly defined. Therefore, a default ThreadPoolTaskScheduler will be created.
Mar 04, 2014 4:46:23 PM org.springframework.beans.factory.config.PropertiesFactoryBean loadProperties
INFO: Loading properties file from URL [jar:file:/D:/Personal%20Data/Softwares/spring-framework-4.0.0.RELEASE-dist/SpringIntegration/spring-integration-3.0.0.RELEASE-dist/spring-integration-3.0.0.RELEASE/libs/spring-integration-core-3.0.0.RELEASE.jar!/META-INF/spring.integration.default.properties]
Mar 04, 2014 4:46:23 PM org.springframework.scheduling.concurrent.ThreadPoolTaskScheduler initialize
INFO: Initializing ExecutorService 'taskScheduler'
Mar 04, 2014 4:46:23 PM org.springframework.context.support.DefaultLifecycleProcessor start
INFO: Starting beans in phase -2147483648
Mar 04, 2014 4:46:23 PM org.springframework.integration.endpoint.EventDrivenConsumer logComponentSubscriptionEvent
INFO: Adding {logging-channel-adapter:_org.springframework.integration.errorLogger} as a subscriber to the 'errorChannel' channel
Mar 04, 2014 4:46:23 PM org.springframework.integration.channel.PublishSubscribeChannel adjustCounterIfNecessary
INFO: Channel 'org.springframework.context.support.ClassPathXmlApplicationContext#34d0cdd0.errorChannel' has 1 subscriber(s).
Mar 04, 2014 4:46:23 PM org.springframework.integration.endpoint.EventDrivenConsumer start
INFO: started _org.springframework.integration.errorLogger
Mar 04, 2014 4:46:23 PM org.springframework.context.support.DefaultLifecycleProcessor start
INFO: Starting beans in phase 0
Mar 04, 2014 4:46:23 PM org.springframework.integration.endpoint.EventDrivenConsumer logComponentSubscriptionEvent
INFO: Adding {service-activator} as a subscriber to the 'pushChannel' channel
Mar 04, 2014 4:46:23 PM org.springframework.integration.channel.DirectChannel adjustCounterIfNecessary
INFO: Channel 'org.springframework.context.support.ClassPathXmlApplicationContext#34d0cdd0.pushChannel' has 1 subscriber(s).
Mar 04, 2014 4:46:23 PM org.springframework.integration.endpoint.EventDrivenConsumer start
INFO: started org.springframework.integration.config.ConsumerEndpointFactoryBean#0
Exception in thread "main" org.springframework.beans.factory.NoUniqueBeanDefinitionException: No qualifying bean of type [org.springframework.integration.MessageChannel] is defined: expected single matching bean but found 3: pushChannel,nullChannel,errorChannel
at org.springframework.beans.factory.support.DefaultListableBeanFactory.getBean(DefaultListableBeanFactory.java:312)
at org.springframework.context.support.AbstractApplicationContext.getBean(AbstractApplicationContext.java:985)
at main.Main.main(Main.java:17)
I suggest you read more docs:
http://docs.spring.io/spring-integration/docs/3.0.1.RELEASE/reference/html
http://www.manning.com/fisher
As you can see from the log, the framework provides two channels out of the box: nullChannel and errorChannel. And they aren't the only beans the framework registers for you.
To fix your issue, just pass the id of your channel to applicationContext.getBean.
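A minimal sketch of that fix (the bean name comes from the XML config above):

MessageChannel messageChannel = applicationContext.getBean("pushChannel", MessageChannel.class);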

TargetNotAMemberException while executing a task using the Hazelcast distributed executor service

I am trying to use the distributed executor service in Hazelcast 3.1 and find that I am unable to use submitToMember(task, member). In my example below, 10.69.108.60 is my local machine and 170.194.100.111 is my remote machine. I am able to get a return value in my future when the member is my local machine, but I get a TargetNotAMemberException if the member is the remote machine.
Below is the code
public class DistExecutionTest {
    public static void main(String args[]) {
        DistributedExecutor dex = new DistributedExecutor();
        try {
            Member member = new MemberImpl(new Address("170.194.100.111", 5701), false);
            String msg;
            msg = dex.echoOnTheMember("Hey youuuu!", member);
            System.out.println(msg);
        } catch (UnknownHostException e1) {
            e1.printStackTrace();
        } catch (InterruptedException e) {
            e.printStackTrace();
        } catch (ExecutionException e) {
            e.printStackTrace();
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
and
public class DistributedExecutor {
    Config config;
    NetworkConfig network;
    JoinConfig join;

    DistributedExecutor() {
        config = new Config();
        network = config.getNetworkConfig();
        // network.setPort(5701);
        join = network.getJoin();
        join.getMulticastConfig().setEnabled(false);
        join.getTcpIpConfig().addMember("170.194.100.111").addMember("10.69.108.60").setEnabled(true);
        network.getInterfaces().setEnabled(true).addInterface("170.194.100.*").addInterface("10.69.108.*");
    }

    public String echoOnTheMember(String input, Member member) throws Exception {
        Callable<String> task = new DistObject(input);
        HazelcastInstance hz = Hazelcast.newHazelcastInstance(config);
        IExecutorService executorService = hz.getExecutorService("default");
        Future<String> future = executorService.submitToMember(task, member);
        String distObjectResult = future.get();
        return distObjectResult;
    }
}
and
public class Echo implements Callable<String>, Serializable, HazelcastInstanceAware {
    private static final long serialVersionUID = -3164053990811643392L;
    String message = null;
    transient HazelcastInstance localInstance;

    public Echo(String msg) {
        message = msg;
    }

    @Override
    public String call() throws Exception {
        return localInstance.toString() + message;
    }

    @Override
    public void setHazelcastInstance(HazelcastInstance hazelcastInstance) {
        this.localInstance = hazelcastInstance;
    }
}
Here is the logging on the local machine
Dec 17, 2013 1:03:20 PM com.hazelcast.instance.DefaultAddressPicker
INFO: Interfaces is enabled, trying to pick one address matching to one of: [162.124.194.*, 10.38.148.*]
Dec 17, 2013 1:03:20 PM com.hazelcast.instance.DefaultAddressPicker
INFO: Prefer IPv4 stack is true.
Dec 17, 2013 1:03:20 PM com.hazelcast.instance.DefaultAddressPicker
INFO: Picked Address[10.69.108.60]:5701, using socket ServerSocket[addr=/0:0:0:0:0:0:0:0,localport=5701], bind any local is true
Dec 17, 2013 1:03:21 PM com.hazelcast.system
INFO: [10.69.108.60]:5701 [dev] Hazelcast Community Edition 3.1 (20131011) starting at Address[10.69.108.60]:5701
Dec 17, 2013 1:03:21 PM com.hazelcast.system
INFO: [10.69.108.60]:5701 [dev] Copyright (C) 2008-2013 Hazelcast.com
Dec 17, 2013 1:03:21 PM com.hazelcast.instance.Node
INFO: [10.69.108.60]:5701 [dev] Creating TcpIpJoiner
Dec 17, 2013 1:03:21 PM com.hazelcast.core.LifecycleService
INFO: [10.69.108.60]:5701 [dev] Address[10.69.108.60]:5701 is STARTING
Dec 17, 2013 1:03:21 PM com.hazelcast.cluster.TcpIpJoiner
INFO: [10.69.108.60]:5701 [dev] Connecting to possible member: Address[10.69.108.60]:5703
Dec 17, 2013 1:03:21 PM com.hazelcast.cluster.TcpIpJoiner
INFO: [10.69.108.60]:5701 [dev] Connecting to possible member: Address[10.69.108.60]:5702
Dec 17, 2013 1:03:21 PM com.hazelcast.nio.SocketConnector
INFO: [10.69.108.60]:5701 [dev] Connecting to /10.69.108.60:5703, timeout: 0, bind-any: true
Dec 17, 2013 1:03:21 PM com.hazelcast.nio.SocketConnector
INFO: [10.69.108.60]:5701 [dev] Connecting to /10.69.108.60:5702, timeout: 0, bind-any: true
Dec 17, 2013 1:03:21 PM com.hazelcast.cluster.TcpIpJoiner
INFO: [10.69.108.60]:5701 [dev] Connecting to possible member: Address[170.194.100.111]:5703
Dec 17, 2013 1:03:21 PM com.hazelcast.cluster.TcpIpJoiner
INFO: [10.69.108.60]:5701 [dev] Connecting to possible member: Address[170.194.100.111]:5702
Dec 17, 2013 1:03:21 PM com.hazelcast.nio.SocketConnector
INFO: [10.69.108.60]:5701 [dev] Connecting to /170.194.100.111:5703, timeout: 0, bind-any: true
Dec 17, 2013 1:03:21 PM com.hazelcast.nio.SocketConnector
INFO: [10.69.108.60]:5701 [dev] Connecting to /170.194.100.111:5702, timeout: 0, bind-any: true
Dec 17, 2013 1:03:21 PM com.hazelcast.cluster.TcpIpJoiner
INFO: [10.69.108.60]:5701 [dev] Connecting to possible member: Address[170.194.100.111]:5701
Dec 17, 2013 1:03:21 PM com.hazelcast.nio.SocketConnector
INFO: [10.69.108.60]:5701 [dev] Connecting to /170.194.100.111:5701, timeout: 0, bind-any: true
Dec 17, 2013 1:03:22 PM com.hazelcast.nio.SocketConnector
INFO: [10.69.108.60]:5701 [dev] Could not connect to: /10.69.108.60:5703. Reason: SocketException[Connection refused: connect to address /10.69.108.60:5703]
Dec 17, 2013 1:03:22 PM com.hazelcast.nio.SocketConnector
INFO: [10.69.108.60]:5701 [dev] Could not connect to: /10.69.108.60:5702. Reason: SocketException[Connection refused: connect to address /10.69.108.60:5702]
Dec 17, 2013 1:03:22 PM com.hazelcast.nio.SocketConnector
INFO: [10.69.108.60]:5701 [dev] Could not connect to: /170.194.100.111:5703. Reason: SocketException[Connection refused: connect to address /170.194.100.111:5703]
Dec 17, 2013 1:03:22 PM com.hazelcast.nio.SocketConnector
INFO: [10.69.108.60]:5701 [dev] Could not connect to: /170.194.100.111:5702. Reason: SocketException[Connection refused: connect to address /170.194.100.111:5702]
Dec 17, 2013 1:03:22 PM com.hazelcast.nio.SocketConnector
INFO: [10.69.108.60]:5701 [dev] Could not connect to: /170.194.100.111:5701. Reason: SocketException[Connection refused: connect to address /170.194.100.111:5701]
Dec 17, 2013 1:03:23 PM com.hazelcast.cluster.TcpIpJoiner
INFO: [10.69.108.60]:5701 [dev]
Members [1] {
Member [10.69.108.60]:5701 this
}
Dec 17, 2013 1:03:23 PM com.hazelcast.core.LifecycleService
INFO: [10.69.108.60]:5701 [dev] Address[10.69.108.60]:5701 is STARTED
HazelcastInstance{name='_hzInstance_1_dev', node=Address[10.69.108.60]:5701}Hey youuuu!
The logging on the remote machine is along these lines. I couldn't paste all of it, but I managed to get the important part.
INFO: [10.69.108.60]:5701 [dev] Connecting to possible member: Address[10.38.148.60]:5703
Dec 17, 2013 1:03:21 PM com.hazelcast.cluster.TcpIpJoiner
INFO: [10.69.108.60]:5701 [dev] Connecting to possible member: Address[10.38.148.60]:5702
Dec 17, 2013 1:03:21 PM com.hazelcast.cluster.TcpIpJoiner
INFO: [10.69.108.60]:5701 [dev] Connecting to possible member: Address[10.38.148.60]:5701
Dec 17, 2013 1:03:21 PM com.hazelcast.cluster.TcpIpJoiner
INFO: [10.69.108.60]:5701 [dev] Connecting to possible member: Address[170.194.100.111]:5703
Dec 17, 2013 1:03:21 PM com.hazelcast.cluster.TcpIpJoiner
INFO: [10.69.108.60]:5701 [dev] Connecting to possible member: Address[170.194.100.111]:5702
Dec 17, 2013 1:03:21 PM com.hazelcast.nio.SocketConnector
Members [2] {
Member [10.69.108.60]:5701 this
Member [170.194.100.111]:5701
}
Instead of creating a member instance directly, could you get the member instance using the hz.getCluster().getMembers() method and select the one you want to send to? I want to see if it is caused by the way you are creating that member.
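Something along these lines (a sketch only; the IP literal is the remote address from the question, and hz/task refer to the instance and Callable already shown above):

Member target = null;
for (Member m : hz.getCluster().getMembers()) {
    // use the cluster's own view of the member instead of a hand-built MemberImpl
    if ("170.194.100.111".equals(m.getSocketAddress().getAddress().getHostAddress())) {
        target = m;
    }
}
Future<String> future = executorService.submitToMember(task, target);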
I can't format text in comments, so I'll put another answer.
So the problem you are suffering from is that your members don't form a cluster.
You should see logging like:
Members [2] {
Member [192.168.1.104]:5701 this
Member [192.168.1.104]:5702
}
That is why I need more logging than just your stacktrace, which currently doesn't provide much value. I need to see what Hazelcast says about joining other clusters.
I need to see something like this:
Dec 17, 2013 7:24:13 PM com.hazelcast.config.XmlConfigBuilder
INFO: Looking for hazelcast.xml config file in classpath.
Dec 17, 2013 7:24:13 PM com.hazelcast.config.XmlConfigBuilder
WARNING: Could not find hazelcast.xml in classpath.
Hazelcast will use hazelcast-default.xml config file in jar.
Dec 17, 2013 7:24:13 PM com.hazelcast.config.XmlConfigBuilder
INFO: Using configuration file /java/projects/Hazelcast/hazelcast/hazelcast/target/classes/hazelcast-default.xml in the classpath.
Dec 17, 2013 7:24:13 PM com.hazelcast.instance.DefaultAddressPicker
INFO: Prefer IPv4 stack is true.
Dec 17, 2013 7:24:13 PM com.hazelcast.instance.DefaultAddressPicker
INFO: Picked Address[192.168.1.102]:5701, using socket ServerSocket[addr=/0:0:0:0:0:0:0:0,localport=5701], bind any local is true
Dec 17, 2013 7:24:13 PM com.hazelcast.system
INFO: [192.168.1.102]:5701 [dev] [3.2-SNAPSHOT] Hazelcast Community Edition 3.2-SNAPSHOT (20131217) starting at Address[192.168.1.102]:5701
Dec 17, 2013 7:24:13 PM com.hazelcast.system
INFO: [192.168.1.102]:5701 [dev] [3.2-SNAPSHOT] Copyright (C) 2008-2013 Hazelcast.com
Dec 17, 2013 7:24:13 PM com.hazelcast.instance.Node
INFO: [192.168.1.102]:5701 [dev] [3.2-SNAPSHOT] Creating MulticastJoiner
Dec 17, 2013 7:24:13 PM com.hazelcast.core.LifecycleService
INFO: [192.168.1.102]:5701 [dev] [3.2-SNAPSHOT] Address[192.168.1.102]:5701 is STARTING
Dec 17, 2013 7:24:15 PM com.hazelcast.cluster.MulticastJoiner
INFO: [192.168.1.102]:5701 [dev] [3.2-SNAPSHOT]
Members [1] {
Member [192.168.1.102]:5701 this
}
Dec 17, 2013 7:24:16 PM com.hazelcast.core.LifecycleService
INFO: [192.168.1.102]:5701 [dev] [3.2-SNAPSHOT] Address[192.168.1.102]:5701 is STARTED
Dec 17, 2013 7:24:16 PM com.hazelcast.partition.PartitionService
INFO: [192.168.1.102]:5701 [dev] [3.2-SNAPSHOT] Initializing cluster partition table first arrangement...
I was also getting the same error.
It is due to this line in the echoOnTheMember function:
HazelcastInstance hz = Hazelcast.newHazelcastInstance(config);
It creates a new Hazelcast instance with the default (or the given) config. The member is then looked up in this new instance, where it is not actually present, and so the error surfaces as
TargetNotAMemberException.
To make it work properly, pass the already-created instance into the echoOnTheMember function, e.g. by making it a member variable of the DistributedExecutor class and setting it via the constructor.
Then, if your actual instance was 'abcdef', use:
IExecutorService executorService = abcdef.getExecutorService("default");
Do not create a new Hazelcast instance.
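A sketch of that refactor (hypothetical: the field and constructor are mine, and Echo is the Callable shown earlier in the question):

public class DistributedExecutor {
    private final HazelcastInstance hz;

    DistributedExecutor(HazelcastInstance hz) {
        this.hz = hz; // reuse the instance that already joined the cluster
    }

    public String echoOnTheMember(String input, Member member) throws Exception {
        Callable<String> task = new Echo(input);
        // no Hazelcast.newHazelcastInstance(...) here
        IExecutorService executorService = hz.getExecutorService("default");
        return executorService.submitToMember(task, member).get();
    }
}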

The MongoDB process is shutting down each day. How can I run mongod forever on the server?

I am a beginner with MongoDB and I have a problem running it on the server.
My project is hosted on servers at hostmonster.com, but they don't give me support for MongoDB databases, although they say I can install it under my own responsibility.
So I installed MongoDB 2.4.1 without problems on Linux 64, and then, inside the MongoDB bin folder (with: mongo, mongod, mongodump ...), I created folders 'data' and 'data/db' for some tests.
From the console, I connect to the server over SSH and run
./mongod --dbpath 'data/db'
and it works.
But I need it to run automatically, forever.
I followed the steps in "Mongodb can't start" and ran the following:
./mongod --fork --dbpath 'data/db' --smallfiles --logpath 'data/mongodb.log' --logappend
It also worked: it started the process, and after I closed the console the process kept running and I could view my data from my domain.
The problem is that the process dies within about a day, i.e., I can no longer see my data from my domain, and then I need to run mongod again with:
./mongod --fork --dbpath 'data/db' --smallfiles --logpath 'data/mongodb.log' --logappend
I don't want to do this every day. My questions are:
What may be the problem? Why does the mongod process die each day?
How can I keep the process running forever?
Sorry for my English.
Edit: Added the last error log. I don't understand it.
Fri Apr 12 03:19:34.577 [TTLMonitor] query local.system.indexes query: { expireAfterSeconds: { $exists: true } } ntoreturn:0 ntoskip:0 nscanned:0 keyUpdates:0 locks(micros) r:141663 nreturned:0 reslen:20 141ms
Fri Apr 12 03:19:34.789 [TTLMonitor] query users.system.indexes query: { expireAfterSeconds: { $exists: true } } ntoreturn:0 ntoskip:0 nscanned:3 keyUpdates:0 locks(micros) r:211595 nreturned:0 reslen:20 211ms
Fri Apr 12 03:20:57.869 [PeriodicTask::Runner] task: DBConnectionPool-cleaner took: 18215ms
Fri Apr 12 03:20:57.931 [PeriodicTask::Runner] task: WriteBackManager::cleaner took: 8ms
Fri Apr 12 03:22:14.155 [PeriodicTask::Runner] task: DBConnectionPool-cleaner took: 32ms
Fri Apr 12 03:22:14.215 [PeriodicTask::Runner] task: WriteBackManager::cleaner took: 14ms
Fri Apr 12 03:22:30.670 [TTLMonitor] query actarium.system.indexes query: { expireAfterSeconds: { $exists: true } } ntoreturn:0 ntoskip:0 nscanned:2 keyUpdates:0 locks(micros) r:430204 nreturned:0 reslen:20 430ms
Fri Apr 12 03:23:14.825 [PeriodicTask::Runner] task: DBConnectionPool-cleaner took: 7ms
Fri Apr 12 03:23:31.133 [TTLMonitor] query actarium.system.indexes query: { expireAfterSeconds: { $exists: true } } ntoreturn:0 ntoskip:0 nscanned:2 keyUpdates:0 locks(micros) r:179175 nreturned:0 reslen:20 168ms
Fri Apr 12 03:25:19.201 [PeriodicTask::Runner] task: WriteBackManager::cleaner took: 505ms
Fri Apr 12 03:25:23.370 [TTLMonitor] query local.system.indexes query: { expireAfterSeconds: { $exists: true } } ntoreturn:0 ntoskip:0 nscanned:0 keyUpdates:0 locks(micros) r:3604735 nreturned:0 reslen:20 3604ms
Fri Apr 12 03:25:25.294 [TTLMonitor] query users.system.indexes query: { expireAfterSeconds: { $exists: true } } ntoreturn:0 ntoskip:0 nscanned:3 keyUpdates:0 numYields: 1 locks(micros) r:3479328 nreturned:0 reslen:20 1882ms
Fri Apr 12 03:26:26.647 [TTLMonitor] query actarium.system.indexes query: { expireAfterSeconds: { $exists: true } } ntoreturn:0 ntoskip:0 nscanned:2 keyUpdates:0 numYields: 1 locks(micros) r:1764712 nreturned:0 reslen:20 1044ms
Fri Apr 12 04:09:27.804 [TTLMonitor] query actarium.system.indexes query: { expireAfterSeconds: { $exists: true } } ntoreturn:0 ntoskip:0 nscanned:2 keyUpdates:0 locks(micros) r:200919 nreturned:0 reslen:20 200ms
Fri Apr 12 04:43:54.002 got signal 15 (Terminated), will terminate after current cmd ends
Fri Apr 12 04:43:54.151 [interruptThread] now exiting
Fri Apr 12 04:43:54.151 dbexit:
Fri Apr 12 04:43:54.157 [interruptThread] shutdown: going to close listening sockets...
Fri Apr 12 04:43:54.160 [interruptThread] closing listening socket: 9
Fri Apr 12 04:43:54.160 [interruptThread] closing listening socket: 10
Fri Apr 12 04:43:54.160 [interruptThread] closing listening socket: 11
Fri Apr 12 04:43:54.160 [interruptThread] removing socket file: /tmp/mongodb-27017.sock
Fri Apr 12 04:43:54.160 [interruptThread] shutdown: going to flush diaglog...
Fri Apr 12 04:43:54.160 [interruptThread] shutdown: going to close sockets...
Fri Apr 12 04:43:54.176 [interruptThread] shutdown: waiting for fs preallocator...
Fri Apr 12 04:43:54.176 [interruptThread] shutdown: lock for final commit...
Fri Apr 12 04:43:54.176 [interruptThread] shutdown: final commit...
Fri Apr 12 04:43:54.176 [interruptThread] shutdown: closing all files...
Fri Apr 12 04:43:54.212 [interruptThread] closeAllFiles() finished
Fri Apr 12 04:43:54.220 [interruptThread] journalCleanup...
Fri Apr 12 04:43:54.246 [interruptThread] removeJournalFiles
Fri Apr 12 04:43:54.280 [interruptThread] error removing journal files
boost::filesystem::directory_iterator::construct: No such file or directory: "/home2/anuncio3/bin/mongodb-linux-x86_64-2.4.1/bin/data/db/journal"
Fri Apr 12 04:43:54.280 [interruptThread] error couldn't remove journal file during shutdown boost::filesystem::directory_iterator::construct: No such file or directory: "/home2/anuncio3/bin/mongodb-linux-x86_64-2.4.1/bin/data/db/journal"
Fri Apr 12 04:43:54.285 shutdown failed with exception
Fri Apr 12 04:43:54.285 dbexit: really exiting now
Your answer is here:
Fri Apr 12 04:43:54.002 got signal 15 (Terminated), will terminate after current cmd ends
Fri Apr 12 04:43:54.151 [interruptThread] now exiting
Your process is receiving signal 15, which is the default kill signal. It's possible that their systems are automatically killing long-running processes or something similar. If that is indeed what's happening, then your host would have to resolve that.
Additionally, these errors:
Fri Apr 12 04:43:54.280 [interruptThread] error removing journal files
boost::filesystem::directory_iterator::construct: No such file or directory: "/home2/anuncio3/bin/mongodb-linux-x86_64-2.4.1/bin/data/db/journal"
Fri Apr 12 04:43:54.280 [interruptThread] error couldn't remove journal file during shutdown boost::filesystem::directory_iterator::construct: No such file or directory: "/home2/anuncio3/bin/mongodb-linux-x86_64-2.4.1/bin/data/db/journal"
indicate that something is wrong with your install's data directory. The journal files either don't exist, or are going missing; if some process on the system is trying to clean things up, then it wouldn't surprise me if something is nuking your journal files.
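In the meantime, a crude workaround (just a sketch: it assumes your account is allowed to use cron, and you'd substitute your real install paths) is a watchdog cron entry that relaunches mongod whenever it is not running:

# crontab entry: every 5 minutes, restart mongod if no such process exists
*/5 * * * * pgrep -x mongod > /dev/null || /path/to/mongodb/bin/mongod --fork --dbpath /path/to/data/db --smallfiles --logpath /path/to/data/mongodb.log --logappend

This doesn't address why the host sends signal 15 in the first place; it just shortens the downtime after each kill.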
I know this is an old question, but my experience might be helpful for other readers.
Based on my tests, they only let you run a program for about 5 minutes (sometimes a bit more) before killing it, so it's fairly useless to install MongoDB there unless you have a dedicated IP.
