Elasticsearch inaccessible from browser

I have installed Elasticsearch and Marvel and am able to access Elasticsearch through curl.
This is what I get when I curl PUBLIC_DNS:9200:
{
  "status" : 200,
  "name" : "Ares",
  "version" : {
    "number" : "1.1.1",
    "build_hash" : "f1585f096d3f3985e73456debdc1a0745f512bbc",
    "build_timestamp" : "2014-04-16T14:27:12Z",
    "build_snapshot" : false,
    "lucene_version" : "4.7"
  },
  "tagline" : "You Know, for Search"
}
and this when I curl to PUBLIC_DNS:9200/_plugin/marvel/
<head><meta http-equiv="refresh" content="0; URL=/_plugin/marvel/"></head>
How can I access Elasticsearch through the browser? The installation guide says it should be available in the browser at localhost:9200.
The server is running on an AWS instance with port 9200 added to the security group.

In your comment you mentioned that curl -XGET SERVER_DNS:9200 works. Where were you running that from: the server itself (in AWS) or your local system? And where is the browser you were trying to access the URL from?
What values have you set for "network.bind_host" and "network.host" in your elasticsearch.yml config? With the default configuration Elasticsearch is accessible from anywhere, but for security reasons many people bind it to localhost or an intranet IP to restrict access from outside.
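For reference, a minimal sketch of the relevant elasticsearch.yml settings, assuming you want the node reachable from outside the host (the 0.0.0.0 value is an assumption, not taken from your setup):
# /etc/elasticsearch/elasticsearch.yml (path may vary by install method)
# Bind to all interfaces so the node is reachable from other machines.
# This exposes port 9200 to anyone who can reach it, so keep the AWS
# security group (and any host firewall) restricted accordingly.
network.bind_host: 0.0.0.0
network.host: 0.0.0.0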

To reach the Elasticsearch server from another machine's browser, the server's own firewall must also allow the port.
To check the current status, run: sudo ufw status verbose
If ufw is active, open the Elasticsearch port with sudo ufw allow 9200 (note that ufw enable only turns the firewall on; it does not open any ports), as in the sketch below.
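A minimal sketch, assuming Ubuntu's ufw and the default Elasticsearch HTTP port:
# Check whether ufw is active and which rules exist
sudo ufw status verbose
# Allow inbound connections to Elasticsearch
sudo ufw allow 9200/tcp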

Related

ASP.NET Core 6 debug using device name, not just localhost

I have created a basic WebAPI with ASP.NET Core 6 and Visual Studio.
I just have a main route that returns "Hello world", and I modified the port to 8888.
When I debug this, I get my expected string if I use localhost:8888; however, I'd also like it to work when I browse to device:8888 (device being my machine's name).
It seems I am using a Kestrel server. I've tried a few things, but they did not work for me:
How do I get the kestrel web server to listen to non-localhost requests?
How to specify the port an ASP.NET Core application is hosted on?
https://learn.microsoft.com/en-us/aspnet/core/fundamentals/servers/kestrel/endpoints?view=aspnetcore-7.0
Any ideas on how can I call http://device:8888 with my debug server running?
Firstly, I changed launchSettings.json to set the applicationUrl to a custom name. Then I ran the API project and got an error indicating a DNS failure.
So I modified the hosts file to add 127.0.0.1 mycustomname, and this time it worked for me.
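For illustration, the relevant part of Properties/launchSettings.json might look roughly like this (the profile name MyApi and the host name mycustomname are placeholders, not taken from the question):
{
  "profiles": {
    "MyApi": {
      "commandName": "Project",
      "applicationUrl": "http://mycustomname:8888"
    }
  }
}
Combined with the hosts-file entry above, browsing to http://mycustomname:8888 resolves to 127.0.0.1 and reaches the debug server.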
I used the following:
Google Chrome redirecting localhost to https - to stop Chrome automatically converting localhost into https
and then https://learn.microsoft.com/en-us/aspnet/core/fundamentals/servers/kestrel/endpoints?view=aspnetcore-7.0 - a few ways of doing this:
using dotnet run --urls=http://*:8888 (binds to any host name on port 8888)
using config, adding the following in appsettings.json
"Kestrel": {
"Endpoints": {
"Http": {
"Url": "http://localhost:5000"
},
"Https": {
"Url": "https://localhost:5001"
}
}
}
The docs also describe using the ASPNETCORE_URLS environment variable, but I was not able to make it work.
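For completeness, a sketch of the environment-variable route on Windows (one possible reason it appears not to work is that the launch profile's applicationUrl overrides ASPNETCORE_URLS, hence the --no-launch-profile flag):
REM Tell Kestrel to bind to any host name on port 8888
set ASPNETCORE_URLS=http://*:8888
REM Skip launchSettings.json so its applicationUrl does not override the variable
dotnet run --no-launch-profile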

Keycloak on an AWS EC2 Windows instance runs inside the instance but is not reachable from outside

I'm using Keycloak version 15.1.1 and a 64-bit Windows EC2 instance.
I downloaded the RDP file from AWS, logged into the instance over RDP, and added Keycloak plus MySQL Connector 8.0.31 to connect Keycloak to an external database.
I followed this guide: https://jbjerksetmyr.medium.com/how-to-setup-a-keycloak-server-with-external-mysql-database-on-aws-ecs-fargate-in-clustered-mode-9775d01cd317
I did everything as described there.
Note: the linked guide uses the "standalone.sh" command to run Keycloak, but on a Windows instance "standalone" (standalone.bat) is enough.
So I ran Keycloak using the following commands:
Command 1: "standalone -b 172.31.35.208" (this is my private IP)
It runs successfully on 172.31.35.208:9090.
Here port 9090 is mapped to 8080, so my Keycloak currently runs on port 9090. But if I use this same address outside of the instance, it shows the error "This site can't be reached".
Command 2: "standalone -b http://ec2-35-180-74-78.eu-west-3.compute.amazonaws.com" (this is my public DNS)
(screenshot: running inside the instance)
It runs successfully on http://ec2-35-180-74-78.eu-west-3.compute.amazonaws.com:9090/
But if I use this same address outside of the instance, it shows the error "This site can't be reached".
(screenshot: error from outside the instance)
Note: the -b flag tells the instance to run Keycloak bound to my private IP or my public DNS.
I tried this: I edited the inbound rules to add ports 8080 and 9090 and allow that traffic, but it doesn't work.
Can anyone help me solve this issue? The Keycloak service is only accessible from inside the instance, not from a public browser.
Issue fixed. The problem was caused by version 15; after switching to version 19, it works now.
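For anyone on the newer Quarkus-based distribution (Keycloak 17 and later), the legacy standalone scripts no longer exist; a rough Windows equivalent of the command above would be something like:
REM Development mode, binding the HTTP listener to a custom port
bin\kc.bat start-dev --http-port=9090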

connection refused connecting to remote mongodb server

So we've accumulated enough applications in our network that use MongoDB to justify building a dedicated server specifically for MongoDB. Unfortunately, I'm pretty new to MongoDB (coming from SQL/MySQL derivatives). I have followed several guides on installing and configuring MongoDB for my environment. None are perfect, but I think I'm close... I've managed to get to the point where I can connect to the db server from the local server using the following command:
mongo -u user 127.0.0.1/admin
However, I'm NOT able to connect to the server using this from either the local OR a remote computer using its network address, i.e.:
mongo -u user 192.168.24.102/admin
I've tried with authentication both enabled and disabled, and I've tried setting bindIp to 192.168.24.102 and 0.0.0.0 with no love. Thinking it was a firewall issue, I disabled the firewall entirely... same, no love.
So what's the secret sauce? How do I connect to a MongoDB server remotely?
Some notes to know: this server is on a local network only. There will be some NAT shenanigans at some point directing public traffic to it from remote application servers, but only on specific ports (we will NOT be using 27017 when that happens), and it will sit behind a pretty robust firewall appliance, so I'm not as worried about securing the server as I am about securing MongoDB itself.
This answer assumes a setup where a Linux server is completely remote and has MongoDB already installed.
Steps:
1. Connect to your remote server over SSH.
ssh <userName>@<server-IP-address>
2. Start Mongo shell and add users to MongoDB.
Add the admin user:
use admin
db.createUser(
  {
    user: "AdminSammy",
    pwd: "AdminSammy'sSecurePassword",
    roles: [
      "userAdminAnyDatabase",
      "dbAdminAnyDatabase",
      "readWriteAnyDatabase"
    ]
  }
)
Then add general user/users. Users are added to specific databases.
use some_db
db.createUser({
  user: 'userName',
  pwd: 'secretPassword',
  roles: [{ role: 'readWrite', db: 'some_db' }]
})
3. Edit your MongoDB config file, mongod.conf, which is found in the /etc directory.
sudo vim /etc/mongod.conf
Scroll down to the #security: section and add the following lines. Make sure to un-comment the security: line.
security:
  authorization: enabled
After authorization has been enabled, only clients authenticated with a password can access the database; in this case, those are the users added in step 2 above.
Note: Visual Studio Code can also be used over SSH to edit the mongod.conf file.
4. Add the server's IP address to the mongod.conf file.
Look for the net: section and add the IP address of the server that is hosting this MongoDB installation, for example 178.45.55.88:
# network interfaces
net:
  port: 27017
  bindIp: 127.0.0.1,178.45.55.88
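Alternatively, a sketch of an all-interfaces binding (simpler, but more exposed; rely on the firewall and the authorization enabled in step 3 if you use this):
# network interfaces
net:
  port: 27017
  # 0.0.0.0 listens on every interface, not just the two addresses above
  bindIp: 0.0.0.0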
5. Open port 27017 on your server instance.
This allows access to your MongoDB server from anywhere in the world to anyone who knows your remote server's IP address, which is one more reason to have authenticated users. More robust ways of handling security are really important; consult the MongoDB manual for that.
Check firewall status using ufw.
sudo ufw status
If it's not active, activate it:
sudo ufw enable
Then,
sudo ufw allow 27017
Important: you also need to allow port 22 for SSH communication with your remote server; otherwise you will be locked out of it. The assumption here is that SSH uses its default port, 22.
sudo ufw allow 22
6. Restart Mongo daemon (mongod)
sudo systemctl restart mongod
7. Connect to remote Mongo server using Mongo shell
You can now connect to the remote MongoDB server using the following command.
mongo -u <user-name> -p <user-password> <remote-server-IP-address>:<mongo-server-port>
You can also connect to the remote MongoDB server with authentication:
mongo -u <user-name> -p <user-password> <remote-server-IP-address>:<mongo-server-port> --authenticationDatabase <auth-db-name>
You can also connect to a specific remote MongoDB database with authentication:
mongo -u <user-name> -p <user-password> <remote-server-IP-address>:<mongo-server-port>/<db-name> --authenticationDatabase <auth-db-name>
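The same connection can also be written as a single connection-string URI, which most MongoDB drivers accept as well (all values are placeholders):
mongo "mongodb://userName:secretPassword@178.45.55.88:27017/some_db?authSource=some_db"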
At this point you can read and write within the some_db database from your local computer without SSH.
Important: take the standard security measures for any database into consideration. Local security practices should guide what you do at each of the above steps.

IIS error: localhost refused to connect, ERR_CONNECTION_REFUSED

I am setting up a server using IIS 7, and I succeeded in installing it and accessing it via localhost:80 or [my IP]:80. But when I access [my IP]:80 from my smartphone, I get "localhost refused to connect: ERR_CONNECTION_REFUSED". What should I do to solve this error?
I tried to restart IIS, but it didn't work.
I tried to check the log file, but there was no error.
Based on your description, I guess you may not have added the IP address binding for your site.
I suggest adding both the public IP and the virtual IP to the site bindings, and make sure your domain is right.
For details on how to add the binding, see the IIS site-bindings dialog (the original answer referenced an image here).
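If the missing binding is indeed the problem, one way to add it from the command line is appcmd (a sketch; the site name and the LAN IP are assumptions for a default setup):
REM Add an HTTP binding on the machine's LAN IP so other devices can reach the site
%windir%\system32\inetsrv\appcmd set site /site.name:"Default Web Site" /+bindings.[protocol='http',bindingInformation='192.168.1.10:80:']
Also worth checking: Windows Firewall must allow inbound traffic on port 80, or phones on the same network will still see a connection refused.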

Changing the default Gitlab port

I have installed the latest Gitlab-CE (8.10) on CentOS 7 (fresh install) via the Omnibus package as described here: https://about.gitlab.com/downloads/#centos7
Now, I would like to change the default port at which one can access the Gitlab web interface. To this end, I followed the instructions at http://docs.gitlab.com/omnibus/settings/nginx.html#change-the-default-port-and-the-ssl-certificate-locations, namely I included
external_url "http://127.0.0.1:8765"
in the configuration file /etc/gitlab/gitlab.rb and then updated the configuration with gitlab-ctl reconfigure && gitlab-ctl restart.
However, when I then navigate to http://127.0.0.1:8765, Gitlab keeps redirecting to http://127.0.0.1/users/sign_in, i.e., the port specification is somehow discarded. When I then manually change the URL in the browser to http://127.0.0.1:8765/users/sign_in, it correctly displays the login page and interestingly, all links on the page (e.g., "Explore", "Help") contain the port specification.
In order to fix this behavior, is it necessary to specify the port also somewhere else than in /etc/gitlab/gitlab.rb?
Issue here: https://gitlab.com/gitlab-org/gitlab-ce/issues/20131
Workaround:
add this line to /etc/gitlab/gitlab.rb:
nginx['proxy_set_headers'] = { "X-Forward-Port" => "8080", "Host" => "<hostname>:8080" }
replace the port and hostname with your values, then run as root or with sudo:
gitlab-ctl reconfigure
gitlab-ctl restart
This worked for me on Debian 8.5, with gitlab-ce from the GitLab repo.
In addition to external_url, the documentation also suggests setting a few NGINX proxy headers:
By default, when you specify external_url, omnibus-gitlab will set a few NGINX proxy headers that are assumed to be sane in most environments.
For example, omnibus-gitlab will set:
"X-Forwarded-Proto" => "https",
"X-Forwarded-Ssl" => "on"
(if you have specified https schema in the external_url).
However, if your GitLab runs in a more complex setup, e.g. behind a reverse proxy, you will need to tweak the proxy headers in order to avoid errors like "The change you wanted was rejected" or "Can't verify CSRF token authenticity Completed 422 Unprocessable".
This can be achieved by overriding the default headers, e.g. by specifying in /etc/gitlab/gitlab.rb:
nginx['proxy_set_headers'] = {
  "X-Forwarded-Proto" => "http",
  "CUSTOM_HEADER" => "VALUE"
}
Save the file and reconfigure GitLab for the changes to take effect.
This way you can specify any header supported by NGINX that you require.
The OP ewcz confirms in the comments:
I just uncommented the default settings for nginx['proxy_set_headers'] in /etc/gitlab/gitlab.rb (also, changing X-Forwarded-Proto to http and removing X-Forwarded-Ssl) and suddenly it works!
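Putting the confirmed fix together, the relevant /etc/gitlab/gitlab.rb section would look roughly like this (the hostname is a placeholder; the header values are the file's commented-out defaults with X-Forwarded-Proto switched to http and X-Forwarded-Ssl removed, as described above):
# /etc/gitlab/gitlab.rb
external_url "http://gitlab.example.com:8765"
nginx['proxy_set_headers'] = {
  "Host" => "$http_host",
  "X-Real-IP" => "$remote_addr",
  "X-Forwarded-For" => "$proxy_add_x_forwarded_for",
  "X-Forwarded-Proto" => "http"
}
Then apply with: sudo gitlab-ctl reconfigure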
