VieleRETS and Rapattoni RETS Server - web

I am using vieleRETS as my RETS client. I have all the information for the Rapattoni RETS server, such as the user ID, password and Agent header.
I asked the ISP to open up port 6103, and the support team confirmed that the port is open. The website is hosted on shared hosting.
At my request the tech monitored HTTP requests on port 6103; there is no activity on that port.
I checked the RETS server with firewall_check.php from the vieleRETS extras folder.
It works, but the same test on the website failed with the following error:
FAILURE ERRNO 111 ERRSTR Connection refused
My question: if port 6103 is open on the web server, will this return success?
This is the code that performs the check:
set_time_limit(0);
$socket = @fsockopen($address, $port, $errno, $errstr);
if ($socket) { echo "SUCCESS"; fclose($socket); }
else { echo "FAILURE ERRNO $errno ERRSTR $errstr"; }

Rapattoni support is the best in the west (everywhere, actually, but that doesn't rhyme). It always helps if you can provide an HTTP capture from Wireshark or Fiddler. Make sure that your MLS hasn't put an IP filter on your setup. Also make sure that you are selecting version 1.7.2... I've been out of the RETS game for some months, but I'm sure Tony will get you up and running... rets@rapattoni.com should be your POC.

Related

FileSync local endpoint offline

I have 3 servers (one running Windows Server 2012 R2 and two running Windows Server 2019) and I use Azure File Sync to sync files between them.
For a few days now I have had a problem: the 2012 R2 server appears offline in the Azure portal (it shows "no activity"). I tried the Test-StorageSyncNetworkConnectivity cmdlet and it fails with the following message:
Discovery service connectivity result:
Result: Success
HostUri: unknown
HostIPv4Addr: Fail. DNS name does not exist. Resolution through GetAddrInfo failed with error: 11001
HostIPv6Addr: Fail. DNS name does not exist. Resolution through GetAddrInfo failed with error: 11001
Management service connectivity result:
Result: Fail. Failed to run test
HostUri: unknown
HostIPv4Addr: Fail. DNS name does not exist. Resolution through GetAddrInfo failed with error: 11001
HostIPv6Addr: Fail. DNS name does not exist. Resolution through GetAddrInfo failed with error: 11001
HostNetworkLatency [min,avg,max]: Network Latency Request Failed.
Monitoring service connectivity result:
Result: No response from monitoring agent process.
HostUri: unknown
HostIPsAddr: IPv4 and Ipv6 addresses do not exist
ServerEndpoint: faf66731-1e22-47eb-93eb-b8d3331f0de2
SyncServiceResult:
SyncServiceHostUri:
SyncServiceHostIPsAddr: IPv4 and Ipv6 addresses do not exist
SyncServiceHostNetworkLatency: Request Failed.
ServerEndpoint: 80f3bb96-463b-4f86-9e26-8dcf0c92f915
SyncServiceResult:
SyncServiceHostUri:
SyncServiceHostIPsAddr: IPv4 and Ipv6 addresses do not exist
SyncServiceHostNetworkLatency: Request Failed.
ServerEndpoint: b9a874b4-7acd-4174-b5e8-26ac23c84c7e
SyncServiceResult:
SyncServiceHostUri:
SyncServiceHostIPsAddr: IPv4 and Ipv6 addresses do not exist
SyncServiceHostNetworkLatency: Request Failed.
Remediation Steps
For Azure File Sync to work correctly, you will need to configure your servers to communicate with multiple Azure services
Refer the following public document for details on proxy settings or firewall settings for Azure File Sync - https://aka.ms/AFS/ProxyAndFirewall
If you have configured a private endpoint refer the following public document for configuring private endpoint for Azure File Sync - https://aka.ms/AFS/PrivateEndpoint
NetworkTestPassed Report
----------------- ------
False ...
The problem seems to be DNS related, but I tried the Test-NetConnection -ComputerName <remote-host> -Port 443 cmdlet with the correct URLs (taken from https://learn.microsoft.com/it-it/azure/storage/file-sync/file-sync-firewall-and-proxy#test-network-connectivity-to-service-endpoints) and all the endpoints seem to be working fine (the ping fails, but I think that is expected behavior). E.g.:
PS C:\Program Files\Azure\StorageSyncAgent> Test-NetConnection -ComputerName tm-kailani7.one.microsoft.com -Port 443
WARNING: Ping to tm-kailani7.one.microsoft.com failed -- Status: TimedOut
ComputerName : tm-kailani7.one.microsoft.com
RemoteAddress : 20.38.85.153
RemotePort : 443
InterfaceAlias : Ethernet 2
SourceAddress : 192.168.0.185
PingSucceeded : False
PingReplyDetails (RTT) : 0 ms
TcpTestSucceeded : True
I also tried the FileSyncErrorsReport.ps1 but even that doesn't give me any error:
WARNING: There are no file sync errors to report. Either the last completed sync session did not have per-item errors or the ItemResults event log on the server wrapped due to too many per-item errors and the event log no longer contains errors for this sync group. To learn more, see the Azure File Sync troubleshooting documentation: https://aka.ms/AFS/FileSyncErrorReport
I think the problem lies in the fact that the AzureStorageSyncMonitor.exe process is not running; if I try to run it manually it just closes itself after a few seconds.
I've got no event ID 9301 (specified here: https://learn.microsoft.com/it-it/azure/storage/file-sync/file-sync-troubleshoot?tabs=portal1%2Cazure-portal#server-endpoint-health) and by searching the other Event Viewer folders I could only find event 4104, which shows an error dated to the last time the server reached the Azure endpoint:
Querying for new jobs failed.
HttpErrorCode: 0x80C8700C
InternalErrorCode: 0x80C80300
Any help would be greatly appreciated, thank you.
• Check event ID 9302 in the FileSync telemetry logs under 'Applications and Services Logs' in Event Viewer; an active sync session is logged every 5 to 10 minutes, so check whether it is making any progress. The 'AzureStorageSyncMonitor.exe' utility is what synchronizes the status of the server endpoint to the Storage Sync service shown in the portal.
• You can also check 'Perfmon.msc': performance counters are built into Azure File Sync so you can monitor sync activity locally on the server.
• Please also check the server's configured IP address settings, since you are encountering a DNS resolution issue when running the 'Test-StorageSyncNetworkConnectivity' command. In the IP address settings, check whether the configured DNS server IP addresses (preferred and secondary) are correct and reachable.
Also check the 'hosts' file under 'C:\Windows\System32\drivers\etc': it should contain the correct IP address of the server (the Windows Server 2012 R2 machine) and its expected DNS hostname, as various services on the server, including 'AzureStorageSyncMonitor', refer to it when sending DNS requests to the connected/configured external services and when communicating between internal services.
• Finally, I would suggest disabling negative caching on the DNS client, putting the suffix with the matching host A record as the last entry in the suffix search list, and using 'AF_UNSPEC' for the family so that your code determines the 'A/AAAA' results for you (a small sketch follows below).
For more detailed information on this, refer to the link below:
https://learn.microsoft.com/en-us/troubleshoot/windows-server/networking/getaddrinfo-fails-error-11001-call-af-inet6-family#workaround
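To illustrate that last point about AF_UNSPEC: leaving the address family unspecified lets the resolver return both A and AAAA records in a single lookup. Below is only a minimal sketch using Node.js, which is not part of the File Sync setup; it reuses the tm-kailani7.one.microsoft.com hostname from the Test-NetConnection output above purely as an example. Any equivalent resolver check run on the affected server (with the family left unspecified) serves the same purpose of confirming that the Storage Sync endpoints resolve at all.

// check-dns.js - family: 0 is the "unspecified" case (AF_UNSPEC), so the
// resolver may return both IPv4 (A) and IPv6 (AAAA) results in one call.
const dns = require('dns');

const hostname = 'tm-kailani7.one.microsoft.com'; // example host taken from the question

dns.lookup(hostname, { family: 0, all: true }, (err, addresses) => {
  if (err) {
    // A name-not-found failure (error 11001 in the question) surfaces here as ENOTFOUND.
    console.error('Lookup failed for ' + hostname + ': ' + err.code);
    return;
  }
  for (const { address, family } of addresses) {
    console.log(hostname + ' -> ' + address + ' (IPv' + family + ')');
  }
});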

Blazor server side app on IIS frequently disconnects WebSocket connection

I have a Blazor server side app published on IIS 10.
When browsing to an arbitrary page and just letting it idle, after a minute or so (sometimes only 45 seconds, sometimes between one and two minutes) the modal
Attempting to reconnect to server ...
appears for a couple of seconds.
In the browser console the logging shows either
Error: Connection disconnected with error 'Error: Server timeout
elapsed without receiving a message from the server.'.
or
Information: Connection disconnected.
Since this seems to be a timeout problem, I added the following options to ConfigureServices in my Startup.cs:
services.AddServerSideBlazor()
    .AddHubOptions(options =>
    {
        options.ClientTimeoutInterval = TimeSpan.FromMinutes(10);
        options.KeepAliveInterval = TimeSpan.FromSeconds(3);
        options.HandshakeTimeout = TimeSpan.FromMinutes(10);
    });
This does not solve the problem though.
I also went to the advanced settings of my site in IIS and increased the connection timeout from the default 120 sec to 600 sec. This did not help either.
Those frequent disconnections only happen on the live site hosted on IIS 10.
If I start the app locally with Visual Studio the connection is stable.
Any hints of what I'm missing would be appreciated!
Update:
As suggested by @agua from mars in a comment below, I changed the transport type like this:
app.UseEndpoints(endpoints =>
{
    endpoints.MapControllers();
    endpoints.MapBlazorHub(options => { options.Transports = HttpTransportType.LongPolling; });
    endpoints.MapFallbackToPage("/_Host");
});
With this change the connection is still closed. The console log shows
Information: (LongPolling transport) Poll terminated by server.
I also tried HttpTransportType.ServerSentEvents which does not work at all but gives this error
Error: Failed to start the connection: Error: Unable to connect to the
server with any of the available transports. ServerSentEvents failed:
Error: 'ServerSentEvents' does not support Binary.
Update 2:
The IIS site is configured to use HTTP/1.1.
I tried changing to HTTP/2 but this did not change anything regarding the disconnections.
This is related to application pool recycling in IIS, as stated by @Programmer. You can reproduce it by going into the application pools in IIS, right-clicking the pool and choosing Recycle to force it. Your Blazor app will show the "reconnect modal screen".
For me, I did not want to disable pool recycling, so I added JS in the _Host.cshtml file as
<script>Blazor.defaultReconnectionHandler._reconnectCallback = function (d) {document.location.reload();}</script>
to automatically reconnect when the server comes back up.
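For reference, here is a minimal sketch of where that override can live in _Host.cshtml, assuming the default Blazor Server setup: the snippet must run after the framework script so that window.Blazor and its default reconnection handler exist. The reload-on-disconnect behaviour is the same one-liner as above.

<!-- _Host.cshtml, at the end of <body> -->
<script src="_framework/blazor.server.js"></script>
<script>
    // When the circuit is lost (e.g. after an app pool recycle), reload the page
    // instead of leaving the "Attempting to reconnect" modal on screen.
    Blazor.defaultReconnectionHandler._reconnectCallback = function (d) {
        document.location.reload();
    };
</script>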
Try this out..
app.UseEndpoints(endpoints =>
{
    // other settings
    // ...
    endpoints.MapBlazorHub(options => options.WebSockets.CloseTimeout = new TimeSpan(1, 1, 1));
    // other settings
    // ...
});
This could be related to IIS application pool recycling. Try disabling the recycling to see if that's causing the disconnection.
I suffer from the same problem on my Blazor Server app too: Myspector.com
I am sure this comes from the network of the data provider. I use Othello in Germany with 4G and see a disconnection within 5 seconds. When I am on Wi-Fi with T-Online, against the same target server, there are no disconnections at all.
I think some operators are incompatible with Blazor Server / WebSockets...
From my recent experience, especially on a shared server: increase the application pool memory. Our connectivity issues went away when we bumped 256 MB up to 1 GB for a small user base.

Atlassian-connect: Error on 'installed' event

I'm trying to run the example Jira add-on.
I have created the credentials.json file and have run npm i and node app.js.
But I have problems with the installed event. Here is the Node.js log:
Watching atlassian-connect.json for changes
Add-on server running at http://MacBook-Air.local:3000
Initialized sqlite3 storage adapter
Local tunnel established at https://a277dbdf.ngrok.io/
Check http://127.0.0.1:4040 for tunnel status
Registering add-on...
GET /atlassian-connect.json 200 13.677 ms - 784
Saved tenant details for 608ff294-74b9-3edf-8124-7efae2c16397 to database
{ key: 'my-add-on',
clientKey: '608ff294-74b9-3edf-8124-7efae2c16397',
publicKey: 'MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQCtKxrEBipTMXhRHlv9zcSLR2Y9h5YQgNQ5vpJ40tF9RmuIzByjkKTurCLHFwMAWU6aLQM+H+Z8wAlpL9AVlN5NKrEP8+a3mGFUOj/5nSJ7ZWHjgju0sqUruyEkKLvKuhWkKkd9NqBxogN0hxv7ue5msP5ezwei/nTJXmnmA5qOAQIDAQAB',
sharedSecret: 'LfT9elHM7iHkto5pHr+MnpH0SR1ypunIDoCyt6ugVJ1Q4hWHurG8k5DjVzLcvT2C98DDbiJiA89VNB0e3DiUvQ',
serverVersion: '100075',
pluginsVersion: '1.3.407',
baseUrl: 'https://gleb-olololololo-22.atlassian.net',
productType: 'jira',
description: 'Atlassian JIRA at https://gleb-olololololo-22.atlassian.net ',
eventType: 'installed' }
POST /installed?user_key=admin 204 51.021 ms - -
Failed to register with host https://gleb-olololololo-22%40yopmail.com:gleb-olololololo-22#gleb-olololololo-22.atlassian.net (200)
The add-on host did not respond when we tried to contact it at "https://a277dbdf.ngrok.io/installed" during installation (the attempt timed out). Please try again later or contact the add-on vendor.
{"type":"INSTALL","pingAfter":300,"status":{"done":true,"statusCode":200,"contentType":"application/vnd.atl.plugins.task.install.err+json","errorMessage":"The add-on host did not respond when we tried to contact it at \"https://a277dbdf.ngrok.io/installed\" during installation (the attempt timed out). Please try again later or contact the add-on vendor.","source":"https://a277dbdf.ngrok.io/atlassian-connect.json","name":"https://a277dbdf.ngrok.io/atlassian-connect.json"},"links":{"self":"/rest/plugins/1.0/pending/80928cb9-f64e-42d0-9a7e-a1fe8ba81055","alternate":"/rest/plugins/1.0/tasks/80928cb9-f64e-42d0-9a7e-a1fe8ba81055"},"timestamp":1513692335651,"userKey":"admin","id":"80928cb9-f64e-42d0-9a7e-a1fe8ba81055"}
Add-on not registered; no compatible hosts detected
I have reviewed tons of information on Google but didn't find an answer.
Some more details that may help you answer:
It happens suddenly. Everything worked OK, but about 1 week ago I started to get this error and cannot fix it. I didn't change anything, I just ran the add-on again, as I do every day.
If I try to upload the add-on manually I get this error in the terminal:
GET / 302 17.224 ms - 0
GET /atlassian-connect.json 200 2.503 ms - 783
Found existing settings for client 608ff294-74b9-3edf-8124-7efae2c16397. Authenticating reinstall request
Authentication verification error: 401 Could not find authentication data on request
POST /installed?user_key=admin 401 22.636 ms - 45
The most likely reason (that I've found on Google) is a wrong server time, but the time on my local machine is correct (at least for my timezone).
Does anyone have any thoughts about this problem?
Thanks!
I kept randomly having this happen to me. It would be working, then I'd run npm start and get the error. Since I'm not using a database right now, I simply removed all references to the juggling-sqlite database. This was in package.json, package-lock.json and config.json, and I just removed store.db. That got it working for me. It's pretty frustrating that this happens; I'm not sure of a better way around it.

Error reaching the Node.js server in drupal 7

I have installed a Node.js server on shared hosting. I have a Drupal site in which I am using the Node.js integration module to connect to the Node.js server.
But whenever I try to broadcast a message from the admin panel, I get this error in the DB log: Error reaching the Node.js server at "nodejs/publish" with {"data":{"somecustomdata":"http://www.google.ca"},"channel":"nodejs_user_1","callback":"myowncallback","clientSocketId":""}: [404] Not Found.
Any help would be appreciated.
It is very likely one of two things:
The Drupal server is accessing the wrong URI.
The Node.js server is not listening on the URI you expect it to.
Of course something less obvious might be causing errors, but please verify those two before proceeding.
The best approach would be to have your Drupal server print the URI it is trying to access in its error logs, and then manually verify that you can reach it in your browser or with another tool (see the sketch below).
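As one such manual check, here is a minimal Node.js sketch that POSTs an empty message to the publish URL and prints the HTTP status code. The hostname, port and '/nodejs/publish' path are assumptions pieced together from the error message and a typical nodejs.config.js, so adjust them to your own settings. An ECONNREFUSED means nothing is listening on that host:port at all, a 404 reproduces the problem (wrong path or port), and any other HTTP response at least proves the server is reachable there.

// probe-nodejs.js - quick connectivity probe for the Node.js integration server
const http = require('http');

const options = {
  hostname: 'localhost',   // host from your nodejs.config.js (assumed value)
  port: 8080,              // port from your nodejs.config.js (assumed value)
  path: '/nodejs/publish', // baseAuthPath + publishUrl, as seen in the Drupal error
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
};

const req = http.request(options, (res) => {
  console.log('HTTP ' + res.statusCode + ' from ' + options.path);
  res.resume(); // discard the body; we only care whether the endpoint answers
});

req.on('error', (err) => {
  // ECONNREFUSED here means nothing is listening on that host:port.
  console.error('Request failed: ' + err.code);
});

req.end(JSON.stringify({ channel: 'nodejs_user_1', data: {}, callback: '' }));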
Thanks "alandrev" for your help.I have resolved that issue on the same day but I forgot to add my mistake.Actually I was not configuring the nodejs correctly.I was using the incorrect port number on backend settings in nodejs.config.js file.The correct settings mentioned below:
backendSettings = {
  "scheme": "https",
  "host": "yourhostname",
  "port": "port number which is not already in use",
  'sslKeyPath': 'key file path for ssl enabled site otherwise leave empty',
  'sslCertPath': 'certificate path for ssl enabled site otherwise leave empty',
  'sslCAPath': '',
  "resource": "/socket.io",
  "baseAuthPath": '/nodejs/',
  "publishUrl": "publish",
  "serviceKey": "",
  "backend": {
    "port": 443,
    "scheme": 'https or http',
    "host": "yourhostname",
    "messagePath": "/nodejs/message/"
  },
  "clientsCanWriteToChannels": false,
  "clientsCanWriteToClients": false,
  "extensions": "",
  "debug": false,
  "addUserToChannelUrl": 'user/channel/add/:channel/:uid',
  "publishMessageToContentChannelUrl": 'content/token/message',
  "jsMinification": true,
  "jsEtag": true,
  "logLevel": 1
};
I solved this same issue by adding "polling" to the transports:
backendSettings = {
  "scheme": "http",
  "host": "localhost",
  "port": 8081,
  "key": "/path/to/key/file",
  "cert": "/path/to/cert/file",
  "resource": "/socket.io",
  "publishUrl": "publish",
  "serviceKey": "SERVICE KEY",
  "backend": {
    "port": 80,
    "host": "localhost",
    "messagePath": "/mysite/nodejs/message/"
  },
  "clientsCanWriteToChannels": true,
  "clientsCanWriteToClients": true,
  "extensions": "",
  "debug": true,
  "transports": ["websocket", "polling", "flashsocket", "htmlfile", "xhr-polling", "jsonp-polling"],
  "jsMinification": true,
  "jsEtag": true,
  "logLevel": 1
};

connect EADDRNOTAVAIL in Node.js under high load - how to free or reuse TCP ports faster?

I have a small wiki-like web application based on the Express framework which uses Elasticsearch as its back end. For each request it basically just goes to the Elasticsearch DB, retrieves the object and returns it rendered by the Handlebars template engine. The communication with Elasticsearch is over HTTP.
This works great as long as I have only one Node.js instance running. After I updated my code to use the cluster module (as described in the Node.js documentation) I started to encounter the following error: connect EADDRNOTAVAIL.
This error shows up when I have 3 or more Python scripts running which constantly retrieve URLs from my server. With 3 scripts I can retrieve ~45,000 pages; with 4 or more scripts running it is between 30,000 and 37,000 pages. Running only 2 scripts or 1 script, I stopped them after half an hour, by which time they had retrieved 310,000 and 160,000 pages respectively.
I've found this similar question and tried changing http.globalAgent.maxSockets but that didn't have any effect.
This is the part of the code which listens for the URLs and retrieves the data from elastic search.
app.get('/wiki/:contentId', (req, res) ->
  http.get(elasticSearchUrl(req.params.contentId), (innerRes) ->
    if (innerRes.statusCode != 200)
      res.send(innerRes.statusCode)
      innerRes.resume()
    else
      body = ''
      innerRes.on('data', (bodyChunk) ->
        body += bodyChunk
      )
      innerRes.on('end', () ->
        res.render('page', {'title': req.params.contentId, 'content': JSON.parse(body)._source.html})
      )
  ).on('error', (e) ->
    console.log('Got error: ' + e.message) # the error is reported here
  )
)
)
UPDATE:
After looking into it more, I now understand the root of the problem. I ran the command netstat -an | grep -e tcp -e udp | wc -l several times during my test runs to see how many ports were in use, as described in the post Linux: EADDRNOTAVAIL (Address not available) error. I observed that at the moment I received the EADDRNOTAVAIL error, 56,677 ports were in use (instead of ~180 normally).
Also, when using only 2 simultaneous scripts, the number of used ports saturates at around 40,000 (+/- 2,000), i.e. ~20,000 ports per script (that is the point at which Node.js cleans up old ports before new ones are created); with 3 scripts running it goes past those ~56,677 ports (to ~60,000). This explains why it fails with 3 scripts requesting data but not with 2.
So now my question becomes: how can I force Node.js to free up the ports more quickly, or to reuse the same port all the time (which would be the preferable solution)?
Thanks
For now, my solution is setting the agent in my request options to false, which, according to the documentation,
opts out of connection pooling with an Agent, defaults request to Connection: close.
As a result, the number of used ports doesn't exceed 26,000. This is still not a great solution, all the more so since I don't understand why reusing ports doesn't work, but it solves the problem for now.
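If the goal is the preferable option from the question - reusing connections instead of opening a new ephemeral port for every Elasticsearch request - a keep-alive agent is worth trying instead of agent: false. The following is only a sketch, assuming a Node.js version whose http.Agent supports the keepAlive option; elasticSearchUrl() and app are taken from the question and assumed to exist, with the helper assumed to return a full URL string.

const http = require('http');
const { URL } = require('url');

// One shared keep-alive agent: sockets to Elasticsearch are kept open and reused,
// so the process stops consuming a fresh ephemeral port for every request.
const esAgent = new http.Agent({
  keepAlive: true, // reuse sockets instead of closing them after each response
  maxSockets: 50,  // cap concurrent connections per host; tune to your load
});

function esRequestOptions(contentId) {
  const u = new URL(elasticSearchUrl(contentId)); // helper from the question (assumed)
  return { hostname: u.hostname, port: u.port, path: u.pathname + u.search, agent: esAgent };
}

app.get('/wiki/:contentId', (req, res) => {
  http.get(esRequestOptions(req.params.contentId), (innerRes) => {
    // ...same status check and body handling as in the question...
  }).on('error', (e) => console.log('Got error: ' + e.message));
});

Whether this actually keeps the port count down can be verified with the same netstat count used above.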
