Setting up SFTP in Sublime Text 3 for an EC2 Amazon Linux Instance

I have been trying to set up remote (FTP) access to some playground files on an AWS EC2 instance. Having created an FTP user and installed vsftpd, I kept getting a "connection timeout" from Sublime SFTP, so I decided to try the SSH key route. Here is my server setup in SFTP; it also gets a "connection timeout." What could be the reason for this? Is this on the client or server side?
Now, before anyone suggests it, I do have port 22 open as my SSH port in the AWS Security Groups / Inbound Rules settings.
I blanked out some entries such as the server, password, and key name.
{
    // sftp, ftp or ftps
    "type": "sftp",
    "sync_down_on_open": true,
    "sync_same_age": true,
    "host": "ec2-xx-xx-xx-xxx.us-west-2.compute.amazonaws.com",
    "user": "ec2",
    //"password": "******",
    "port": "22",
    "remote_path": "/home/user/",
    //"file_permissions": "664",
    //"dir_permissions": "775",
    //"extra_list_connections": 0,
    "connect_timeout": 30,
    //"keepalive": 120,
    //"ftp_passive_mode": true,
    //"ftp_obey_passive_host": false,
    "ssh_key_file": " ~/.ssh/file.pem",
    //"sftp_flags": ["-F", "~/.ssh/file.pem"],
    "sftp_flags": ["-o IdentityFile=/Users/user/.ssh/file.pem"]
    //"sftp_flags": ["-o", IdentityFile="/Users/user/.ssh/file.pem"],
    //"preserve_modification_times": false,
    //"remote_time_offset_in_hours": 0,
    //"remote_encoding": "utf-8",
    //"remote_locale": "C",
    //"allow_config_upload": false,
}
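A quick way to tell whether the timeout is on the client or the server side is to try the same host and key with the stock OpenSSH clients; if these also time out, the problem is in the network/security-group path rather than in Sublime SFTP (a sketch, reusing the placeholder host and key path from the config above; note the default user on Amazon Linux is ec2-user):

# Verbose SSH test; -v shows where the handshake stalls
ssh -v -i ~/.ssh/file.pem ec2-user@ec2-xx-xx-xx-xxx.us-west-2.compute.amazonaws.com
# The same test over SFTP
sftp -i ~/.ssh/file.pem ec2-user@ec2-xx-xx-xx-xxx.us-west-2.compute.amazonaws.com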

This worked for me:
{
    "type": "sftp",
    "save_before_upload": true,
    "upload_on_save": false,
    "sync_down_on_open": false,
    "sync_skip_deletes": false,
    "sync_same_age": true,
    "confirm_downloads": false,
    "confirm_sync": true,
    "confirm_overwrite_newer": false,
    "host": "1.2.3.4", // IPv4 Public IP in the Instances > Description tab of the EC2 Dashboard
    "user": "ec2-user",
    "port": "22",
    "remote_path": "/var/www/html/",
    "ignore_regexes": [
        "\\.sublime-(project|workspace)", "sftp-config(-alt\\d?)?\\.json",
        "sftp-settings\\.json", "/venv/", "\\.svn/", "\\.hg/", "\\.git/",
        "\\.bzr", "_darcs", "CVS", "\\.DS_Store", "Thumbs\\.db", "desktop\\.ini"
    ],
    "connect_timeout": 10,
    "ssh_key_file": "C:/Users/me/Desktop/aws.ppk", // .ppk file generated with PuTTYgen
}

Related

Is it possible for someone to log into MongoDB without the correct password if authentication is enabled?

I recently set up my first MongoDB database in a production environment. I looked up some deployment guides and followed them.
I had the following in my config:
# Where and how to store data.
storage:
  dbPath: /var/lib/mongodb
  journal:
    enabled: true
#  engine:
#  mmapv1:
#  wiredTiger:

# where to write logging data.
systemLog:
  destination: file
  logAppend: true
  path: /var/log/mongodb/mongod.log

# network interfaces
net:
  port: 27017
  bindIp: 127.0.0.1

# how the process runs
processManagement:
  timeZoneInfo: /usr/share/zoneinfo

security:
  authorization: "enabled"
And I created an admin user (the only user) that looks like this:
{
    "_id" : "admin.admin",
    "userId" : UUID("6dfe010f-1e62-4801-9c07-5a408b8c75c6"),
    "user" : "admin",
    "db" : "admin",
    "credentials" : {
        [omitted, but contains SCRAM-SHA-1 and SCRAM-SHA-256 hashes]
    },
    "roles" : [
        {
            "role" : "userAdminAnyDatabase",
            "db" : "admin"
        },
        {
            "role" : "readWriteAnyDatabase",
            "db" : "admin"
        },
        {
            "role" : "dbAdminAnyDatabase",
            "db" : "admin"
        }
    ]
}
I also switched the outward facing port for the database (through nginx) to a nearby port that wasn't the default.
With all of that, I still got hacked, and I was greeted with a ransom note when I opened the database in NoSQLBooster.
Fortunately for me, I wasn't storing any sensitive information (just an aggregation of data pulled from a variety of other services) and all of the data can easily be regenerated. However, I'd rather not have this type of thing happen again.
I did some digging in the logs and found the moment they connected:
{
    "t": { "$date": "2021-02-04T22:10:38.614+00:00" },
    "s": "I",
    "c": "ACCESS",
    "id": 20250,
    "ctx": "conn75191",
    "msg": "Successful authentication",
    "attr": {
        "mechanism": "SCRAM-SHA-256",
        "principalName": "admin",
        "authenticationDatabase": "admin",
        "client": "127.0.0.1:39722"
    }
}
{
    "t": { "$date": "2021-02-04T22:11:21.521+00:00" },
    "s": "I",
    "c": "NETWORK",
    "id": 51800,
    "ctx": "conn75192",
    "msg": "client metadata",
    "attr": {
        "remote": "127.0.0.1:39918",
        "client": "conn75192",
        "doc": {
            "driver": { "name": "PyMongo", "version": "3.11.2" },
            "os": {
                "type": "Linux",
                "name": "Linux",
                "architecture": "x86_64",
                "version": "5.8.0-41-generic"
            },
            "platform": "CPython 3.8.6.final.0"
        }
    }
}
Shortly after that login, I can see them drop my database and insert the note. Funnily enough, it looks like they never saved the data, so the whole thing is obviously a scam. I also checked the server's auth.log to ensure that nobody logged into the server itself, so I'm pretty sure they haven't tampered with the filesystem unless they did some magic through nginx.
I did some testing with authentication off and found that you can have an "authenticated" connection with an incorrect password when authentication is off. At this point, my question is: are there any ways to get in without knowing the password if my config is set as specified above and my mongo server has been restarted since the last configuration change? Or is the only possibility that they have my password? I'm completely stumped. Around a week before the attack, I tried logging in with incorrect passwords to ensure that authorization was enabled correctly, and I was denied as expected.
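For reference, one way to repeat that enforcement check from the shell (a sketch, assuming the default port and the classic mongo shell):

# Unauthenticated command that requires auth; with authorization enabled
# this should fail with a "requires authentication" error
mongo --port 27017 --eval 'db.adminCommand({ listDatabases: 1 })'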
In case it's relevant, here's the rule for my MongoDB port in Nginx:
stream {
    server {
        listen 27018;
        proxy_connect_timeout 1s;
        proxy_timeout 3s;
        proxy_pass stream_mongo_backend;
    }

    upstream stream_mongo_backend {
        server 0.0.0.0:27017;
    }
}
Yes, even when authentication is enabled you can connect to the Mongo database without any credentials. However, apart from harmless commands like db.help(), db.getMongo(), db.listCommands(), db.version(), etc., you can't execute anything.
Obviously the hacker connected from localhost with valid credentials, so it looks like he got access to your machine. Maybe he read your application's Python script, which contains the password.
NB: you write that only the admin user was created. You should use the admin account only for administrative tasks and keep its password private. The application should not run under such an admin account; it should use a dedicated account that has only the permissions required to run the application.
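A minimal sketch of creating such a dedicated account (the user, password, and database names below are placeholders, not from the original post):

# Create an application user with readWrite on just the application database
mongo admin -u admin -p --eval '
  db.getSiblingDB("appdb").createUser({
    user: "appUser",
    pwd: "a-strong-generated-password",
    roles: [ { role: "readWrite", db: "appdb" } ]
  })'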

Attach aws emr cluster to remote jupyter notebook using sparkmagic

I am trying to connect and attach an AWS EMR cluster (emr-5.29.0) to a Jupyter notebook that I am working on from my local Windows machine. I have started a cluster with Hive 2.3.6, Pig 0.17.0, Hue 4.4.0, Livy 0.6.0, and Spark 2.4.4, and the subnets are public. I found that this can be done with Azure HDInsight, so I was hoping something similar could be done using EMR. The issue I am having is passing the correct values in the config.json file. How should I attach an EMR cluster?
I could work in the EMR notebooks native to AWS, but thought I could go the develop-locally route, and have hit a roadblock.
{
    "kernel_python_credentials" : {
        "username": "{IAM ACCESS KEY ID}",  # not sure about the username for the cluster
        "password": "{IAM SECRET ACCESS KEY}",  # I use PuTTY to ssh into the cluster with the pem key, so again not sure about the password for the cluster
        "url": "ec2-xx-xxx-x-xxx.us-west-2.compute.amazonaws.com",  # as per the AWS blog, when Amazon EMR is launched with Livy installed, the EMR master node becomes the endpoint for Livy
        "auth": "None"
    },
    "kernel_scala_credentials" : {
        "username": "{IAM ACCESS KEY ID}",
        "password": "{IAM SECRET ACCESS KEY}",
        "url": "{Master public DNS}",
        "auth": "None"
    },
    "kernel_r_credentials": {
        "username": "{}",
        "password": "{}",
        "url": "{}"
    },
Update 1/4/2021
On 1/4, I got sparkmagic to work in my local Jupyter notebook. I used these documents as references (ref-1, ref-2 & ref-3) to set up local port forwarding (if possible, avoid using sudo):
sudo ssh -i ~/aws-key/my-pem-file.pem -N -L 8998:ec2-xx-xxx-xxx-xxx.us-west-2.compute.amazonaws.com:8998 hadoop@ec2-xx-xxx-xxx-xxx.us-west-2.compute.amazonaws.com
Configuration details
Release label: emr-5.32.0
Hadoop distribution: Amazon 2.10.1
Applications: Hive 2.3.7, Livy 0.7.0, JupyterHub 1.1.0, Spark 2.4.7, Zeppelin 0.8.2
Updated config file
{
    "kernel_python_credentials" : {
        "username": "",
        "password": "",
        "url": "http://localhost:8998"
    },
    "kernel_scala_credentials" : {
        "username": "",
        "password": "",
        "url": "http://localhost:8998",
        "auth": "None"
    },
    "kernel_r_credentials": {
        "username": "",
        "password": "",
        "url": "http://localhost:8998"
    },
    "logging_config": {
        "version": 1,
        "formatters": {
            "magicsFormatter": {
                "format": "%(asctime)s\t%(levelname)s\t%(message)s",
                "datefmt": ""
            }
        },
        "handlers": {
            "magicsHandler": {
                "class": "hdijupyterutils.filehandler.MagicsFileHandler",
                "formatter": "magicsFormatter",
                "home_path": "~/.sparkmagic"
            }
        },
        "loggers": {
            "magicsLogger": {
                "handlers": ["magicsHandler"],
                "level": "DEBUG",
                "propagate": 0
            }
        }
    },
    "authenticators": {
        "Kerberos": "sparkmagic.auth.kerberos.Kerberos",
        "None": "sparkmagic.auth.customauth.Authenticator",
        "Basic_Access": "sparkmagic.auth.basic.Basic"
    },
    "wait_for_idle_timeout_seconds": 15,
    "livy_session_startup_timeout_seconds": 60,
    "fatal_error_suggestion": "The code failed because of a fatal error:\n\t{}.\n\nSome things to try:\na) Make sure Spark has enough available resources for Jupyter to create a Spark context.\nb) Contact your Jupyter administrator to make sure the Spark magics library is configured correctly.\nc) Restart the kernel.",
    "ignore_ssl_errors": false,
    "session_configs": {
        "driverMemory": "1000M",
        "executorCores": 2
    },
    "use_auto_viz": true,
    "coerce_dataframe": true,
    "max_results_sql": 2500,
    "pyspark_dataframe_encoding": "utf-8",
    "heartbeat_refresh_seconds": 5,
    "livy_server_heartbeat_timeout_seconds": 60,
    "heartbeat_retry_seconds": 1,
    "server_extension_default_kernel_name": "pysparkkernel",
    "custom_headers": {},
    "retry_policy": "configurable",
    "retry_seconds_to_sleep_list": [0.2, 0.5, 1, 3, 5],
    "configurable_retry_policy_max_retries": 8
}
Second update 1/9
Back to square one. I keep getting this error and have spent days trying to debug it. I'm not sure what I did previously to get things going. I also checked my security group config and it looks fine (SSH open on port 22).
An error was encountered:
Error sending http request and maximum retry encountered.
Creating local port forwarding (an SSH tunnel) to the Livy server on port 8998 got it working like magic:
sudo ssh -i ~/aws-key/my-pem-file.pem -N -L 8998:ec2-xx-xxx-xxx-xxx.us-west-2.compute.amazonaws.com:8998 hadoop@ec2-xx-xxx-xxx-xxx.us-west-2.compute.amazonaws.com
I did not change my config.json file from the 1/4 update.
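With the tunnel up, a quick sanity check (a sketch) is to hit Livy's REST API on the forwarded port before starting Jupyter; it should return a JSON list of sessions:

# Livy must answer here before sparkmagic can create a session
curl http://localhost:8998/sessions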

How to sort / order the servers in Sublime SFTP?

I have around 30 servers that I connect to using Sublime SFTP on Linux. They appear to be randomly ordered in the File > SFTP > Browse Server menu. (I just added a new one, and it was placed in the lower third of the listing.) I have written to the developers a few times without reply.
Can someone say how the ordering works and if there is a way to change it?
For the record, I have searched the contents of all related files I could find, and there is no mention of any of my servers that could indicate their defined order.
The .json format of a server config file is as follows:
{
    // The tab key will cycle through the settings when first created
    // Visit http://wbond.net/sublime_packages/sftp/settings for help

    // sftp, ftp or ftps
    "type": "sftp",
    "sync_down_on_open": true,
    "sync_same_age": true,
    "host": "example.com",
    "user": "username",
    "password": "password",
    "port": "22",
    "remote_path": "/example/path/",
    "file_permissions": "664",
    "dir_permissions": "775",
    "extra_list_connections": 0,
    "connect_timeout": 30,
    "keepalive": 120,
    "ftp_passive_mode": true,
    "ftp_obey_passive_host": false,
    "ssh_key_file": "~/.ssh/id_rsa",
    "sftp_flags": ["-F", "/path/to/ssh_config"],
    "preserve_modification_times": false,
    "remote_time_offset_in_hours": 0,
    "remote_encoding": "utf-8",
    "remote_locale": "C",
    "allow_config_upload": false,
}

SFTP for Sublime is uploading all files to the root instead of sub directories

Attached is the JSON. My coworker is able to upload his files properly; I've even tried his JSON config, and we're using the same username/password and other credentials.
So if I download a file it works fine (for example theirdomain.com/html/resources/views/home.php), but if I upload the same file it goes to theirdomain.com/html instead of the path it's actually located in.
{
    // The tab key will cycle through the settings when first created
    // Visit http://wbond.net/sublime_packages/sftp/settings for help

    // sftp, ftp or ftps
    "type": "ftp",
    "save_before_upload": true,
    "upload_on_save": false,
    "sync_down_on_open": false,
    "sync_skip_deletes": false,
    "sync_same_age": true,
    "confirm_downloads": false,
    "confirm_sync": true,
    "confirm_overwrite_newer": false,
    "host": "host",
    "user": "theiruser",
    "password": "password",
    //"port": "22",
    "remote_path": "/domains/theirdomain.com/html/",
    "ignore_regexes": [
        "\\.sublime-(project|workspace)", "sftp-config(-alt\\d?)?\\.json",
        "sftp-settings\\.json", "/venv/", "\\.svn/", "\\.hg/", "\\.git/",
        "\\.bzr", "_darcs", "CVS", "\\.DS_Store", "Thumbs\\.db", "desktop\\.ini"
    ],
    //"file_permissions": "664",
    //"dir_permissions": "775",
    //"extra_list_connections": 0,
    "connect_timeout": 30,
    //"keepalive": 120,
    //"ftp_passive_mode": true,
    //"ftp_obey_passive_host": false,
    //"ssh_key_file": "~/.ssh/id_rsa",
    //"sftp_flags": ["-F", "/path/to/ssh_config"],
    //"preserve_modification_times": false,
    //"remote_time_offset_in_hours": 0,
    //"remote_encoding": "utf-8",
    //"remote_locale": "C",
    "allow_config_upload": true,
}

Remove properties related to SharePoint from raw OData response

I am using the SharePoint REST API to get/modify data in SharePoint from NodeJS.
I am getting an OData response from the SharePoint REST API, and everything is working as expected, except for one thing.
Currently I am getting a response from the SharePoint REST API as below:
{
    "odata.metadata": "https://test.sharepoint.com/_api/$metadata#SP.ApiData.Lists",
    "value": [
        {
            "odata.type": "SP.List",
            "odata.id": "https://test.sharepoint.com/_api/Web/Lists(guid'sample-guid')",
            "odata.etag": "\"6\"",
            "odata.editLink": "Web/Lists(guid'sample-guid')",
            "AllowContentTypes": true,
            "BaseTemplate": 160,
            "BaseType": 0,
            "ContentTypesEnabled": true,
            "CrawlNonDefaultViews": false,
            "Created": "2015-05-19T11:13:46Z",
            "DefaultContentApprovalWorkflowId": "00000000-0000-0000-0000-000000000000",
            "Description": "Use this list to track access requests to a site or uniquely permissioned items in the site.",
            "Direction": "none",
            "DocumentTemplateUrl": null,
            "DraftVersionVisibility": 0,
            "EnableAttachments": false,
            "EnableFolderCreation": false,
            "EnableMinorVersions": false,
            "EnableModeration": false,
            "EnableVersioning": true,
            "EntityTypeName": "AccessRequests",
            "FileSavePostProcessingEnabled": false,
            "ForceCheckout": false,
            "HasExternalDataSource": false,
            "Hidden": true,
            "Id": "sample-id",
            "IrmEnabled": false,
            "IrmExpire": false,
            "IrmReject": false,
            "IsApplicationList": false,
            "IsCatalog": false,
            "IsPrivate": false,
            "ItemCount": 1,
            "LastItemDeletedDate": "2015-05-19T11:13:46Z",
            "LastItemModifiedDate": "2015-08-04T06:57:22Z",
            "ListItemEntityTypeFullName": "SP.Data.AccessRequestsItem",
            "MajorVersionLimit": 0,
            "MajorWithMinorVersionsLimit": 0,
            "MultipleDataList": false,
            "NoCrawl": true,
            "ParentWebUrl": "/",
            "ParserDisabled": false,
            "ServerTemplateCanCreateFolders": true,
            "TemplateFeatureId": "sample-id",
            "Title": "Test Title"
        }, {
            ........
        }
    ]
}
In the above response I am getting fields related to the SharePoint system along with the fields I want, for example odata.type, odata.id, AllowContentTypes, BaseTemplate, etc.
How do I get only the fields I require and not the other SharePoint-related fields?
Can anybody help?
Thanks
For that purpose you could utilize the $select query option (see OData Version 2.0 for more details).
Regarding the odata.type and odata.id properties: since they are metadata properties, you just need to specify the proper Accept header to control whether they are returned (e.g. Accept: application/json;odata=minimalmetadata). For more details see the JSON Light support in REST SharePoint API released article.
The following example returns a specific set of List resource properties such as AllowContentTypes and BaseTemplate:
Endpoint URI: https://contoso.sharepoint.com/_api/web/lists?$select=AllowContentTypes,BaseTemplate
Accept: application/json; odata=minimalmetadata
Method: GET
Result:
{
    "d": {
        "results": [
            {
                "__metadata": {
                    "id": "https://contoso.sharepoint.com/_api/Web/Lists(guid'82dcfcc5-e58c-4610-b4c3-589a7228e912')",
                    "uri": "https://contoso.sharepoint.com/_api/Web/Lists(guid'82dcfcc5-e58c-4610-b4c3-589a7228e912')",
                    "etag": "\"0\"",
                    "type": "SP.List"
                },
                "AllowContentTypes": true,
                "BaseTemplate": 125
            },
            //...
        ]
    }
}
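For reference, the same request issued from the command line might look like this (a sketch; authentication is omitted, and contoso.sharepoint.com is the placeholder host from above):

# $select trims the payload server-side; the Accept header controls how much
# OData metadata comes back (single quotes stop the shell expanding $select)
curl -H 'Accept: application/json; odata=minimalmetadata' \
  'https://contoso.sharepoint.com/_api/web/lists?$select=AllowContentTypes,BaseTemplate'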
