LetsEncrypt-ACMESharp http-01 challenge on IIS invalid

On server A (non-IIS) I executed:
Import-Module ACMESharp
Initialize-ACMEVault
New-ACMERegistration -Contacts mailto:somebody#derryloran.com -AcceptTos
New-ACMEIdentifier -Dns www.derryloran.com -Alias dns1
Complete-ACMEChallenge dns1 -ChallengeType http-01 -Handler manual
The response came back asking:
* Handle Time: [08/05/2017 22:46:27]
* Challenge Token: [BkqO-eYZ5sjgl9Uf3XpM5_s6e5OEgCj9FimuyPACOhI]
To complete this Challenge please create a new file
under the server that is responding to the hostname
and path given with the following characteristics:
* HTTP URL: [http://www.derryloran.com/.well-known/acme-challenge/BkqO-eYZ5sjgl9Uf3XpM5_s6e5OEgCj9FimuyPACOhI]
* File Path: [.well-known/acme-challenge/BkqO-eYZ5sjgl9Uf3XpM5_s6e5OEgCj9FimuyPACOhI]
* File Content: [BkqO-eYZ5sjgl9Uf3XpM5_s6e5OEgCj9FimuyPACOhI.X-01XUeWTE-LgpxWF4D-W_ZvEfu6ue2fAd7DJNhomQM]
* MIME Type: [text/plain]
Server B is serving www.derryloran.com, and I believe it is serving the page at http://www.derryloran.com/.well-known/acme-challenge/BkqO-eYZ5sjgl9Uf3XpM5_s6e5OEgCj9FimuyPACOhI correctly. But when I then, back on Server A, execute:
Submit-ACMEChallenge dns1 -ChallengeType http-01
(Update-ACMEIdentifier dns1 -ChallengeType http-01).Challenges | Where-Object {$_.Type -eq "http-01"}
...the status goes invalid after a few seconds. FWIW I've tried this several times, always with the same result. Why? What am I doing wrong?
I appreciate there's a lot more to go once I've got the certificate, but the site is being served in a Docker container, hence the Server A/B complexities...

Omg, how many times?!? The file had a BOM when it was created in VS. Recreating it with Notepad++ and saving as UTF-8 (without BOM), I'm now getting a valid response.
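For anyone hitting the same thing, here is a small sketch (my own addition, not from the original post) that checks the challenge file for a UTF-8 BOM and strips it if present; the path is the relative File Path reported by Complete-ACMEChallenge above, so run it wherever that file actually lives:

#!/usr/bin/env python3
# Sketch: detect and strip a UTF-8 BOM from the ACME http-01 challenge file.
# The path is the relative "File Path" from the challenge output above.
path = ".well-known/acme-challenge/BkqO-eYZ5sjgl9Uf3XpM5_s6e5OEgCj9FimuyPACOhI"

with open(path, "rb") as fh:
    data = fh.read()

if data.startswith(b"\xef\xbb\xbf"):
    print("BOM found - rewriting file without it")
    with open(path, "wb") as fh:
        fh.write(data[3:])
else:
    print("no BOM - file starts with:", data[:20])

The validator compares the served body against the expected key authorization, so those three leading BOM bytes are enough to make the comparison fail.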

Related

Download multiple file using wget by looping through a text file of IDs

I am trying to download multiple files using wget. I have a text file containing the IDs of the files that I want to download (manifest.tsv, one ID per line).
Currently, I am using the below command:
while read id; do wget https://target-data.nci.nih.gov/Public/AML/miRNA-seq/L3/expression/BCCA/TARGET-FHCRC/$id.txt; done < manifest.tsv
However, I got the following error:
--2022-08-12 23:43:28-- https://target-data.nci.nih.gov/Public/AML/miRNA-seq/L3/expression/BCCA/TARGET-FHCRC/TARGET-00-BM3897-14A-01R.isoform.quantification%0D.txt
Resolving target-data.nci.nih.gov... 129.43.254.217, 2607:f220:41d:21c1::812b:fed9
Connecting to target-data.nci.nih.gov|129.43.254.217|:443... connected.
HTTP request sent, awaiting response... 404 Not Found
2022-08-12 23:43:30 ERROR 404: Not Found.
This is probably because when I loop through the manifest.tsv file, the line-ending character (the %0D carriage return visible in the URL above) is also read, so the file ID is no longer correct.
Could someone help me? I'd really appreciate it!
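If that diagnosis is right, stripping the line ending from each ID before building the URL should fix it. A minimal sketch in Python (my own addition; the manifest name and base URL are taken from the question):

#!/usr/bin/env python3
# Sketch: download each file after stripping the line ending (the %0D carriage return) from the ID.
import urllib.request

base = "https://target-data.nci.nih.gov/Public/AML/miRNA-seq/L3/expression/BCCA/TARGET-FHCRC"

with open("manifest.tsv") as fh:
    for line in fh:
        file_id = line.strip()  # removes the trailing \r\n that broke the wget URL
        if not file_id:
            continue
        url = f"{base}/{file_id}.txt"
        urllib.request.urlretrieve(url, f"{file_id}.txt")
        print("downloaded", url)

Alternatively, removing the carriage returns from manifest.tsv itself (e.g. with dos2unix) before running the original wget loop should have the same effect.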

Packets don't have 'http' layer available

Hi all,
I am learning online about network packets, and I came across 'Scapy' in Python. I am supposed to see an '###[ HTTP ]###' section in the packet results in the terminal, but for some reason I don't see it for some sites. In the video I am learning from, the tutor is using the same code and he sees 'http' for every single site he browses, but I can't duplicate his results.
I have Python 2.7.18 and Python 3.9.9 on my Kali. I tried calling the program in the terminal with both 'python' and 'python3' (no change in finding the 'http' layer in packets).
I am capturing some of the HTTP packets, but not all. I have been working on a Python script on my Kali VM that looks for transmitted packets and displays the URLs and login info in the terminal. The tutorial had pretty much my expected result, but I don't get the same output. In the tutorial the coach was doing the same as I did (go to Bing, open a random image).
Am I doing something wrong? I would appreciate help on this issue, please.
...
# CODE:
#!/usr/bin/env python
import scapy.all as scapy
from scapy.layers import http

def sniff(interface):
    # prn = callback function invoked for every captured packet
    scapy.sniff(iface=interface, store=False, prn=process_sniffed_packet)

def get_url(packet):
    return packet[http.HTTPRequest].Host + packet[http.HTTPRequest].Path

def get_login_info(packet):
    # Only packets with a Raw payload can carry a username and password.
    if packet.haslayer(scapy.Raw):
        load = packet[scapy.Raw].load
        keywords = ["uname", "username", "user", "pass", "password", "login", "Email"]
        for keyword in keywords:
            if keyword in str(load):
                return load

def process_sniffed_packet(packet):
    # print(packet.show())
    if packet.haslayer(http.HTTPRequest):
        url = get_url(packet)
        print("[+] HTTP >> " + str(url))
        login_info = get_login_info(packet)
        if login_info:
            print("\n\nPossible username and Password > " + str(login_info) + "\n\n")

sniff("eth0")  # This interface is connected to the internet
...
RESULT IN TERMINAL: I was browsing to Bing.com and opening a random image.
I used print(packet.show()) for the final image that I browsed. In the tutorial there was a ###[ HTTP ]### layer, but I didn't have that layer. (Image of packet info for the random image.)
┌──(venv)─(root💀kali)-[~/PycharmProjects/hello]
└─# python packet_sniffer.py
[+] HTTP >> b'ocsp.digicert.com/'
[+] HTTP >> b'ocsp.pki.goog/gts1c3'
[+] HTTP >> b'ocsp.pki.goog/gts1c3'
[+] HTTP >> b'ocsp.pki.goog/gts1c3'
[+] HTTP >> b'ocsp.pki.goog/gts1c3'
[+] HTTP >> b'ocsp.pki.goog/gts1c3'
[+] HTTP >> b'ocsp.pki.goog/gts1c3'
[+] HTTP >> b'ocsp.digicert.com/'
^C
My expectation: these are exactly the URLs that I visited for the above result.
┌──(venv)─(root💀kali)-[~/PycharmProjects/hello]
└─# python packet_sniffer.py
[+] HTTP >> file:///usr/share/kali-defaults/web/homepage.html
[+] HTTP >> https://www.google.com/search?client=firefox-b-1-e&q=bing
[+] HTTP >> https://www.bing.com/
[+] HTTP >> https://www.bing.com/search?q=test&qs=HS&sc=8-0&cvid=75111DD366884A028FE0E0D9383A29CD&FORM=QBLH&sp=1
[+] HTTP >> https://www.bing.com/images/search?view=detailV2&ccid=3QI4G5yZ&id=F8B496EB517D80EFD809FCD1EF576F85DDD3A8EE&thid=OIP.3QI4G5yZS31HKo6043_GlAHaEU&mediaurl=https%3a%2f%2fwww.hrt.org%2fwp-content%2fuploads%2f2018%2f01%2fGenetic-Testing-Test-DNA-for-Genetic-Mutations-Telomeres-Genes-and-Proteins-for-Risk-1.jpg&cdnurl=https%3a%2f%2fth.bing.com%2fth%2fid%2fR.dd02381b9c994b7d472a8eb4e37fc694%3frik%3d7qjT3YVvV%252b%252fR%252fA%26pid%3dImgRaw%26r%3d0&exph=3500&expw=6000&q=test&simid=608028087796855450&FORM=IRPRST&ck=326502E72BC539777664412003B5BAC2&selectedIndex=80&ajaxhist=0&ajaxserp=0
^C
...
I was running into a similar issue, which turned out to be that the HTTP/1.0 packets I was attempting to analyze were not being sent over PORT 80. Instead, my packets were being sent over PORT 5000.
It appears that the scapy implementation by default only interprets packets as http when they are sent on PORT 80.
I found the following snippet in this response to a GitHub Issue (for a package which should not be installed, per Cukic0d in their answer to a similar question here).
scapy.packet.bind_layers(TCP, HTTP, dport=5000)
scapy.packet.bind_layers(TCP, HTTP, sport=5000)
Adding this snippet before my call to sniff() resolved my issue and allowed me to proceed.
Hope this helps.
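To make that concrete, here is a minimal sketch of where those bind_layers calls sit relative to sniff(). Port 5000 and the eth0 interface are assumptions carried over from the answer and the question, so adjust them to whatever port your traffic actually uses:

#!/usr/bin/env python3
# Sketch: tell Scapy to dissect HTTP on a non-standard port before sniffing.
# Port 5000 and interface "eth0" are assumptions; change them to match your setup.
from scapy.all import sniff
from scapy.packet import bind_layers
from scapy.layers.inet import TCP
from scapy.layers.http import HTTP, HTTPRequest

# Register the port in both directions so requests and responses are decoded as HTTP.
bind_layers(TCP, HTTP, dport=5000)
bind_layers(TCP, HTTP, sport=5000)

def show_request(packet):
    if packet.haslayer(HTTPRequest):
        req = packet[HTTPRequest]
        host = req.Host or b""
        path = req.Path or b""
        print("[+] HTTP >> " + (host + path).decode(errors="replace"))

sniff(iface="eth0", store=False, prn=show_request)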

Directadmin upload logo using CMD_SKINS with error cannot get mime-type

I'm trying to upload a custom logo using CMD_API_SKINS.
Here is my full code using curl + bash:
#!/bin/bash
# DA needs this path
mkdir -p /home/tmp
# Assume my logo file is already here:
default_logo_file_home="/home/tmp/logo.png"
# The logo file is set to nobody:nogroup
chown nobody:nogroup "${default_logo_file_home}"
## Setup query data for curl:
username="admin"
password="12321aa"
da_port="2222"
host_server="server.domain.com"
ssl="https"
skin_name="evolution"
command="CMD_API_SKINS"
data="action=upload_logo&file=${default_logo_file_home}&json=yes&name=${skin_name}&which=1"
method="POST"
curl --silent --request "${method}" --user "${username}":"${password}" --data "${data}" "${ssl}://${host_server}:${da_port}/${command}"
When debugging this API, I got an error like this:
text='An Error Occurred'
result='Cannot get mime-type for log<br>
It seems like DA is trying to parse out the extension from the file name "logo.png" but couldn't.
Full error logs:
DirectAdmin 1.61.5
Accepting Connections on port 2222
Sockets::handshake - begin
Sockets::handshake - end
/CMD_API_SKINS
0: Accept: */*
1: Authorization: Basic bWF4aW93bng3OnhGVEVHe***jUSg/UTRTfVdHYW0+fWNURn5ATWN***HFbZGpMezlQZ***=
2: Content-Length: 75
3: Content-Type: application/x-www-form-urlencoded
4: Host: server.domain.com:2222
5: User-Agent: curl/7.75.0
Post string: action=upload_logo&file=/home/tmp/logo2.png&json=yes&name=evolution&which=2
auth.authenticated
User::deny_override:/CMD_API_SKINS: call_level=2, depth1: aborting due to do depth
User::deny_override:/CMD_DOMAIN: call_level=2, depth1: aborting due to do depth
User::deny_override:/CMD_DOMAIN: call_level=1, depth2: aborting due to do depth
Plugin::addHooks: start
Plugin::addHooks: end
Command::doCommand(/CMD_API_SKINS)
cannot get mime type for log
Dynamic(api=1, error=1):
text='An Error Occurred'
result='Cannot get mime-type for log<br>
'
Command::doCommand(/CMD_API_SKINS) : finished
Command::run: finished /CMD_API_SKINS
I also tried to encode the query like this, but still got the same error:
default_logo_file_home="%2Fhome%2Ftmp%2Flogo%2Epng"
data="action=upload%5Flogo&file=${default_logo_file_home}&json=yes&name=${skin_name}&which=%31"
Is there any explanation for what is going on here? Is it possible to upload a logo using this API?
OK, I found the trick, which is not documented anywhere. The uploaded file name needs a random string appended to it. So I changed this:
default_logo_file_home="/home/tmp/logo.png"
into this:
RANDOM_STR="EbYIES"
default_logo_file_home="/home/tmp/logo.png${RANDOM_STR}"
Now it's working perfectly.
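For reference, the same request expressed with Python's requests library; this is only a sketch under the assumptions of the question (host, credentials, skin name and the EbYIES suffix are the placeholder values used above), not an official DirectAdmin example:

#!/usr/bin/env python3
# Sketch: upload a skin logo via CMD_API_SKINS, appending the random suffix to the
# server-side file path as described above. All values are placeholders from the question.
import requests

base_url = "https://server.domain.com:2222"
auth = ("admin", "12321aa")

data = {
    "action": "upload_logo",
    "file": "/home/tmp/logo.pngEbYIES",  # random suffix appended to the path sent to DA
    "json": "yes",
    "name": "evolution",
    "which": "1",
}

resp = requests.post(f"{base_url}/CMD_API_SKINS", auth=auth, data=data)
print(resp.status_code)
print(resp.text)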

Gitlab misdetects a binary file as a text file and raises an Internal Error (500 Whoops)

What is the issue?
When I open the link of a commit which involves a binary file from the Commits view of a project on Gitlab, I receive an internal error: "500 Whoops, something went wrong on our end."
This issue also appears when creating a Merge Request whose origin is the same commit as above.
Production.log says,
Started GET "/TempTest/bsp/commit/3098a49f2fd1c77be0c383994aa6655f5d15ebf8" for 127.0.0.1 at 2016-05-30 16:17:15 +0900
Processing by Projects::CommitController#show as HTML
Parameters:{"namespace_id"=>"TempTest", "project_id"=>"bsp", "id"=>"3098a49f2fd1c77be0c383994aa6655f5d15ebf8"}
Encoding::CompatibilityError (incompatible character encodings: UTF-8 and ASCII-8BIT):
app/views/projects/diffs/_file.html.haml:54:in `_app_views_projects_diffs__file_html_haml__1070266479743635718_49404820'
app/views/projects/diffs/_diffs.html.haml:22:in `block in _app_views_projects_diffs__diffs_html_haml__2984561770205002953_48487320'
app/views/projects/diffs/_diffs.html.haml:17:in `each_with_index'
app/views/projects/diffs/_diffs.html.haml:17:in `_app_views_projects_diffs__diffs_html_haml__2984561770205002953_48487320'
app/views/projects/commit/show.html.haml:12:in `_app_views_projects_commit_show_html_haml__3333221152053087461_45612480'
app/controllers/projects/commit_controller.rb:30:in `show'
lib/gitlab/middleware/go.rb:16:in `call'
Completed 500 Internal Server Error in 210ms (Views: 8.7ms | ActiveRecord: 10.5ms)
Gitlab seems to misdetect a binary file as a text file.
So the HTML formatting engine seems to hit an error ("Encoding::CompatibilityError").
It's OK for me that Gitlab sometimes misdetects a binary file as a text file, but the problem is that the Gitlab server aborts the transaction with an internal error when such a misdetection occurs.
Could anyone tell me how to let the server continue the transaction even when such a misjudgment occurs?
For example, I can imagine answers along these lines:
e.g. 1) Force a file to be recognized as binary.
e.g. 2) Bypass the HTML transformation when such an error occurs.
What I tried in order to resolve it:
I added the entry '*.XXX binary' to .gitattributes to confirm whether I could force Gitlab to recognize a certain file as binary.
The Git client recognized the file as a binary file, and the diff did not output text. However, it had no effect in Gitlab even after I pushed it.
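As a side note (my own addition, not part of the original report): as far as I know, Git itself decides text vs. binary by looking for a NUL byte near the start of the file, so a quick check like the sketch below can show whether a given file even looks binary by that heuristic. The 8000-byte window mirrors Git's default but is an assumption here, and this is not GitLab's detection code:

#!/usr/bin/env python3
# Sketch: approximate Git's text/binary heuristic (a NUL byte within roughly the
# first 8000 bytes marks the file as binary).
import sys

def looks_binary(path, window=8000):
    with open(path, "rb") as fh:
        return b"\x00" in fh.read(window)

for name in sys.argv[1:]:
    print(name, "->", "binary" if looks_binary(name) else "text")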
Version info
I faced this issue at first on Gitlab 8.6.2, but the same issue occurs on 8.8.3.
I use git 2.7.2.
Thank you.

Malformed URL: '', skipping (java.net.MalformedURLException)

I crawl sites with Nutch 1.3. I see this exception in my log when Nutch crawls my sites:
Malformed URL: '', skipping (java.net.MalformedURLException: no protocol:
at java.net.URL.<init>(URL.java:567)
at java.net.URL.<init>(URL.java:464)
at java.net.URL.<init>(URL.java:413)
at org.apache.nutch.crawl.Generator$Selector.reduce(Generator.java:247)
at org.apache.nutch.crawl.Generator$Selector.reduce(Generator.java:109)
at org.apache.hadoop.mapred.ReduceTask.runOldReducer(ReduceTask.java:463)
at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:411)
at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:216)
)
How can I solve this? Please help.
According to the docs:
"MalformedURLException is thrown to indicate that a malformed URL has occurred. Either no legal protocol could be found in a specification string or the string could not be parsed."
The thing to be noted here is that this exception is not thrown when the server is down or when the path points to a missing file. It occurs only when URL cannot be parsed.
The error indicates that there is no protocol, and also that the crawler does not see any URL at all:
Malformed URL: '' , skipping (java.net.MalformedURLException: no protocol:
Here is interesting article that I came across, have a look http://www.symphonious.net/2007/03/29/javaneturl-or-javaneturi/
What is the exact URL you are trying to parse?
After having set all settings in regex-urlfilter.txt and seed.txt, try this command:
./nutch plugin protocol-file org.apache.nutch.protocol.file.File file:\\\e:\\test.html
(if the file is located at e:\test.html, as in my example).
Before this, I always ran this:
./nutch plugin protocol-file org.apache.nutch.protocol.file.File \\\e:\test.html
and got this error, because the protocol file: was missing:
java.net.MalformedURLException: no protocol: \\e:\test.html
Malformed URL: ''
means that the URL was empty instead of being something like http://www.google.com.
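That also suggests a quick sanity check on the crawl's seed list: look for blank lines or entries without a protocol before generating a new segment. A minimal sketch (my own addition; seed.txt as the file name is an assumption about your setup):

#!/usr/bin/env python3
# Sketch: flag seed-list entries that would trigger empty-URL / "no protocol" errors in Nutch.
# Assumes a plain text file with one URL per line; "seed.txt" is an assumed name.
from urllib.parse import urlparse

with open("seed.txt") as fh:
    for lineno, raw in enumerate(fh, start=1):
        url = raw.strip()
        if not url:
            print(f"line {lineno}: empty line (would show up as Malformed URL: '')")
            continue
        if not urlparse(url).scheme:
            print(f"line {lineno}: missing protocol in {url!r} (add http:// or https://)")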
