I created an Azure AD account to test SSO. I was able to get Apache to authenticate a site using SSO and pass the authenticated user's email address as a header. I'm having trouble getting the "groups" claim to be passed through.
My Apache config looks as follows:
LoadModule auth_openidc_module /usr/lib64/httpd/modules/mod_auth_openidc.so
<IfModule mod_auth_openidc.c>
OIDCProviderMetadataURL https://sts.windows.net/<removed>/.well-known/openid-configuration
OIDCClientID <removed>
OIDCClientSecret <removed>
OIDCRedirectURI https://<removed>/redirect_uri
OIDCResponseType code
OIDCScope "openid email profile groups family_name given_name"
OIDCSSLValidateServer Off
OIDCCryptoPassphrase <removed>
OIDCPassClaimsAs headers
OIDCClaimPrefix USERINFO_
OIDCRemoteUserClaim email
OIDCPassUserInfoAs claims
OIDCAuthNHeader USER
OIDCPassIDTokenAs claims
OIDCPassRefreshToken On
</IfModule>
My optional claims in Azure AD look like this:
Additionally I created a group in AD called "Users" and added myself to that group. So I would expect to see "Users" passed as some sort of attribute in the headers.
If I print the HTTP headers on the server I see this...
CONTEXT_DOCUMENT_ROOT: /var/httpd/cgi-bin/
CONTEXT_PREFIX: /cgi-bin/
DOCUMENT_ROOT: /var/SP/httpd/htdocs/docs
GATEWAY_INTERFACE: CGI/1.1
HTTP_ACCEPT: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9
HTTP_ACCEPT_ENCODING: gzip, deflate, br
HTTP_ACCEPT_LANGUAGE: en-GB,en-US;q=0.9,en;q=0.8
HTTP_CACHE_CONTROL: max-age=0
HTTP_COOKIE: _ga=GA1.2.601634409.1596125029; mod_auth_openidc_session=c186c9d6-eebe-11ea-8429-7982f43b32a7
HTTP_HOST: <removed>
HTTP_SEC_FETCH_DEST: document
HTTP_SEC_FETCH_MODE: navigate
HTTP_SEC_FETCH_SITE: none
HTTP_SEC_FETCH_USER: ?1
HTTP_UPGRADE_INSECURE_REQUESTS: 1
HTTP_USER: <removed>
HTTP_USER_AGENT: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/84.0.4147.135 Safari/537.36
HTTP_X_AMZN_TRACE_ID: Root=1-5f52559a-6b8b464ec338a6097565fce0
HTTP_X_FORWARDED_FOR: <removed>
HTTP_X_FORWARDED_PORT: 443
HTTP_X_FORWARDED_PROTO: https
LD_LIBRARY_PATH: /opt/apache-2.4/lib64
PATH: /sbin:/usr/sbin:/bin:/usr/bin
QUERY_STRING:
REMOTE_ADDR: <removed>
REMOTE_PORT: 45364
REMOTE_USER: <removed>
REQUEST_METHOD: GET
REQUEST_SCHEME: http
REQUEST_URI: /cgi-bin/headers.cgi
SCRIPT_FILENAME: /var/httpd/cgi-bin/headers.cgi
SCRIPT_NAME: /cgi-bin/headers.cgi
SCRIPT_URI: http://<removed>/cgi-bin/headers.cgi
SCRIPT_URL: /cgi-bin/headers.cgi
SERVER_ADDR: <removed>
SERVER_ADMIN: <removed>
SERVER_NAME: <removed>
SERVER_PORT: 80
SERVER_PROTOCOL: HTTP/1.1
SERVER_SIGNATURE:
SERVER_SOFTWARE: Apache/2.4.46 (Unix) OpenSSL/1.1.1c
X_REMOTE_USER: <removed>
The REMOTE_USER, X_REMOTE_USER and HTTP_USER all show the correct authenticated user email.
I don't see anything related to "groups", "USERINFO_", "family_name", "given_name". Not even blank placeholders.
I'm a bit stuck, as the Apache config looks okay as far as I can tell, and from what I have read the Azure configuration is okay as well.
Any ideas why the claims are not being passed through?
I changed:
OIDCPassClaimsAs headers
to:
OIDCPassClaimsAs both
... and it worked!
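For what it's worth, my understanding from the mod_auth_openidc documentation is that `both` passes the claims as both HTTP request headers and environment variables, and the CGI script picks up the environment-variable form. A minimal sketch of the relevant directives (the rest of the config above unchanged):

```apache
# Pass claims as both request headers and environment variables;
# with "headers" alone the CGI script never saw the USERINFO_ claims.
OIDCPassClaimsAs both
OIDCClaimPrefix USERINFO_
```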
I'm uploading my SVG files to my local MinIO server (running in Docker).
const uploadedFile = await client.putObject(bucketName, filename, readStream);
I then generate a public URL, e.g. http://localhost:9000/link-identifiers/example.svg, and I can download the files from there publicly.
But if I try to display one in the browser with <img src={picUrl}>, the images don't render at all.
I get those Response Headers in the browser:
HTTP/1.1 200 OK
Accept-Ranges: bytes
Content-Length: 124775
Content-Security-Policy: block-all-mixed-content
Content-Type: application/octet-stream
ETag: "109be6a37b9091e50651b3cbbe6bed3a"
Last-Modified: Wed, 02 Sep 2020 06:44:28 GMT
Server: MinIO/RELEASE.2020-08-07T01-23-07Z
Vary: Origin
X-Amz-Request-Id: 1630E4E87DF71408
X-Xss-Protection: 1; mode=block
Date: Wed, 02 Sep 2020 06:52:34 GMT
Is there any additional configuration I need on the MinIO server to make the images render?
If I'm able to download them, and they're perfectly fine when viewed, shouldn't they render in the browser too?
Currently the permissions for the bucket are set to public with:
mc policy set public myminio/link-identifiers
The putObject function takes an optional metadata argument that lets you set the Content-Type:
const metadata = {
  'Content-Type': 'image/svg+xml', // use the concrete MIME type for your files
};
await client.putObject(bucketName, filename, readStream, metadata);
Pass it as an argument so the object is stored with a renderable Content-Type instead of the default application/octet-stream, and the browser will display the image rather than download it.
Download mc.exe from the MinIO download page.
After downloading, open a command line where mc.exe is located and run:
mc.exe alias set myminio http://192.168.1.101:9000 minioadmin minioadmin
or, more generally:
mc.exe alias set myminio [your MinIO server URL] [MinIO username] [MinIO password]
After that, your browser can load the images or files.
I am building a web service for ONVIF camera using gSoap.
I have generated the header and the source files using the core wdsl provided by ONVIF at https://www.onvif.org/profiles/specifications/.
However, every time I make a request from the client I get the error below in the function soap_begin_serve(soap):
SOAP 1.2 fault SOAP-ENV:MustUnderstand[no subcode]
"The data in element 'Security' must be understood but cannot be processed"
What does the above error mean, and how can I fix it?
EDIT: This is what I receive on the camera side:
POST / HTTP/1.1
Content-Type: application/soap+xml; charset=utf-8; action="http://www.onvif.org/ver10/device/wsdl/GetSystemDateAndTime"
Host: localhost:8090
Content-Length: 261
Accept-Encoding: gzip, deflate
Connection: Close
<s:Envelope xmlns:s="http://www.w3.org/2003/05/soap-envelope"><s:Body xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema"><GetSystemDateAndTime xmlns="http://www.onvif.org/ver10/device/wsdl"/></s:Body></s:Envelope>

POST / HTTP/1.1
Content-Type: application/soap+xml; charset=utf-8; action="http://www.onvif.org/ver10/device/wsdl/GetScopes"
Host: localhost:8090
Content-Length: 905
Accept-Encoding: gzip, deflate
Connection: Close
<s:Envelope xmlns:s="http://www.w3.org/2003/05/soap-envelope"><s:Header><Security s:mustUnderstand="1" xmlns="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd"><UsernameToken><Username>admin</Username><Password Type="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-username-token-profile-1.0#PasswordDigest">WFz21zL8rch8LRoxAPzgHRMBbr0=</Password><Nonce EncodingType="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-soap-message-security-1.0#Base64Binary">9y79ka0xD02oCIw6GAoIPwEAAAAAAA==</Nonce><Created xmlns="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd">2017-05-21T08:15:58.902Z</Created></UsernameToken></Security></s:Header><s:Body xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema"><GetScopes xmlns="http://www.onvif.org/ver10/device/wsdl"/></s:Body></s:Envelope>
SOAP 1.2 fault SOAP-ENV:MustUnderstand[no subcode]
"The data in element 'Security' must be understood but cannot be processed"
This means that you will need to enable WS-Security to authenticate:
add #import "wsse.h" to the .h header file (a.k.a. the service and data binding "interface file") for soapcpp2 to process;
in your source code, #include "plugin/wsseapi.h";
in your source code, set the user credentials before sending the request with soap_wsse_add_UsernameTokenDigest(soap, NULL, "username", "password");
compile the source code with the compiler flag -DWITH_OPENSSL, and compile your application code base together with plugin/wsseapi.c, plugin/smdevp.c, and plugin/mecevp.c (the plugin directory is in the gSOAP distribution); of course, also compile stdsoap2.c or stdsoap2.cpp and the other generated files;
link with OpenSSL (-lssl -lcrypto), and perhaps -lz if compression is desired;
when using the full WS-Security plugin capabilities of gSOAP (digital signatures and/or encryption), you should compile all your source code with the options -DWITH_OPENSSL -DWITH_DOM -DWITH_GZIP and also compile dom.c or dom.cpp together with your code.
See also the WS-Security plugin for gSOAP.
Hope this helps.
I have a link for a GitHub repository and I'm using github3 with Python in order to try and search for it.
Take this link for example:
https://github.com/GabrielGrimberg/OOP-Assignment1-UI
If you go to it, you will see that it redirects to
https://github.com/GabrielGrimberg/RuneScape-UI
And thus, I can't figure out how to construct a search query that will find this specific repo.
I've tried:
GabrielGrimberg/OOP-Assignment1-UI in:url
GabrielGrimberg/OOP-Assignment1-UI
GabrielGrimberg/OOP-Assignment1-UI in:full_name
According to the GitHub blog, if a repo is renamed, the old address is redirected to the new address:
We're happy to announce that starting today, we'll automatically redirect all requests for previous repository locations to their new home in these circumstances. There's nothing special you have to do. Just rename away and we'll take care of the rest.
Moreover, you can check that GabrielGrimberg does not have any repo named "OOP-Assignment1-UI".
Corrected answer:
We can first check the repo details to make sure it exists and see where it has moved.
Check out the following query:
curl -i https://github.com/GabrielGrimberg/OOP-Assignment1-UI
You can get the URL it moved to from the Location header:
HTTP/1.1 301 Moved Permanently
Server: GitHub.com
Date: Sun, 12 Feb 2017 18:19:25 GMT
Content-Type: text/html; charset=utf-8
Transfer-Encoding: chunked
Status: 301 Moved Permanently
Cache-Control: no-cache
Vary: X-PJAX
Location: https://github.com/GabrielGrimberg/RuneScape-UI
X-UA-Compatible: IE=Edge,chrome=1
If the repo still existed at that address, it would have returned the content instead of a redirect.
For example, try this:
curl -i https://github.com/GabrielGrimberg/RuneScape-UI
Basically, you need to make a request yourself and check for the redirection if the first search provided no result.
import json
import urllib.error
import urllib.request

def get_redirection(full_name):
    try:
        json_object = json.loads(
            urllib.request.urlopen(
                'https://api.github.com/repos/{0}'.format(full_name)
            ).read().decode('utf-8'))
    except urllib.error.HTTPError:
        return None
    return json_object["full_name"]  # the repo's current full name
I am reading the Processing Apache Logs example in the Logstash Configuration Examples section of the Logstash Reference [1.5]. One of the sentences goes:
"Any additional lines logged to this file will also be captured,
processed by Logstash as events, and stored in Elasticsearch."
I am trying to test this by adding one more line to the monitored log file while Logstash is still running (that is, before shutdown has completed). That is basically what I mean by "real-time" in the question title.
Below is how I actually tried it:
Step 1. Pass in logstash-apache.conf to Logstash
The version of the Logstash I'm using is 1.5.4. And the code for logstash-apache.conf is:
input {
  file {
    path => "/your/path/to/the/log/file"
    start_position => "beginning"
    type => "apache_access"
  }
}

filter {
  if [path] =~ "access" {
    grok {
      match => { "message" => "%{COMBINEDAPACHELOG}" }
    }
  }
  date {
    match => [ "timestamp", "dd/MMM/yyyy:HH:mm:ss Z" ]
  }
}

output {
  elasticsearch {
    host => localhost
    protocol => "http"
    port => "9200"
  }
  stdout { codec => rubydebug }
}
The conf file is almost the same as the example. However, the "apache_access" type is set in the file input plugin instead of in a mutate filter plugin, per the explanation on the site. Please replace the path in the file input plugin with your own.
For your convenience, the sample log is provided here:
71.141.244.242 - kurt [18/May/2011:01:48:10 -0700] "GET /admin HTTP/1.1" 301 566 "-" "Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.2.3) Gecko/20100401 Firefox/3.6.3"
134.39.72.245 - - [18/May/2011:12:40:18 -0700] "GET /favicon.ico HTTP/1.1" 200 1189 "-" "Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0; .NET CLR 2.0.50727; .NET CLR 3.0.4506.2152; .NET CLR 3.5.30729; InfoPath.2; .NET4.0C; .NET4.0E)"
98.83.179.51 - - [18/May/2011:19:35:08 -0700] "GET /css/main.css HTTP/1.1" 200 1837 "http://www.safesand.com/information.htm" "Mozilla/5.0 (Windows NT 6.0; WOW64; rv:2.0.1) Gecko/20100101 Firefox/4.0.1"
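For reference, the %{COMBINEDAPACHELOG} grok pattern that these lines are matched against extracts fields roughly like the following sketch (a simplified regex of my own for illustration, not Logstash's exact pattern):

```python
import re

# Rough approximation of Logstash's COMBINEDAPACHELOG pattern (illustrative only).
COMBINED = re.compile(
    r'(?P<clientip>\S+) (?P<ident>\S+) (?P<auth>\S+) '
    r'\[(?P<timestamp>[^\]]+)\] '
    r'"(?P<verb>\S+) (?P<request>\S+) (?P<httpversion>[^"]+)" '
    r'(?P<response>\d{3}) (?P<bytes>\S+) '
    r'"(?P<referrer>[^"]*)" "(?P<agent>[^"]*)"'
)

# First line of the sample log above.
line = ('71.141.244.242 - kurt [18/May/2011:01:48:10 -0700] '
        '"GET /admin HTTP/1.1" 301 566 "-" '
        '"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.2.3) '
        'Gecko/20100401 Firefox/3.6.3"')

fields = COMBINED.match(line).groupdict()
print(fields['clientip'], fields['verb'], fields['response'])
```

This is just to show which named fields end up on each event in the rubydebug output; Logstash does the equivalent work internally via grok.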
After Logstash's processing, the standard output shows 3 results in the rubydebug format, which can be seen in the uploaded image (of course, these 3 are also indexed in Elasticsearch):
[Image: the 3 results appearing on the standard output in the rubydebug format after Logstash's processing]
Please note that the pipeline created by the conf file has not been shut down at this point.
Step 2. Add one more line of log to the file using the text editor in the server and save the change
This is the line I add, which should be the 4th line in the log file:
71.141.244.242 - kurt [18/May/2011:01:48:10 -0700] "GET /admin HTTP/1.1" 301 566 "-" "Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.2.3) Gecko/20100401 Firefox/3.6.3"
After doing this, I expected one more result to show up on the standard output, because I believe the file input plugin tracks appended content, as the file input plugin section of the same reference says:
The plugin aims to track changing files and emit new content as it’s
appended to each file.
Unfortunately, nothing happened.
Am I on the wrong track, and doing the whole thing wrong? If not, could anyone here help me achieve what I intend to do, and possibly explain the mechanism behind it? Any help will be greatly appreciated.
I have this image from Filepicker.io: https://www.filepicker.io/api/file/9H-1AxgZTwqct8tjkmkZ
But when I open it in the browser, it downloads the file directly. I think that's because of a response header or something, so I'm wondering how to proxy it so that I can view it in the browser like other images, such as this one: https://distilleryimage1.s3.amazonaws.com/84d490a4071811e285a622000a1d039f_5.jpg
curl -si https://www.filepicker.io/api/file/9H-1AxgZTwqct8tjkmkZ | head
HTTP/1.1 200 OK
Access-Control-Allow-Headers: CONTENT-TYPE, X-NO-STREAM
Access-Control-Allow-Methods: DELETE, GET, HEAD, POST, PUT
Access-Control-Allow-Origin: *
Access-Control-Max-Age: 21600
Cache-Control: public, max-age=315360000, no-transform
Content-Disposition: attachment; filename="中秋福利.jpg"
Content-Type: image/jpeg
Date: Fri, 28 Sep 2012 08:21:45 GMT
Server: gunicorn/0.14.6
Content-Disposition is set to attachment. If you proxy it, remove that header altogether or set it to inline.
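A minimal sketch of the header rewrite such a proxy would perform (plain Python, just to illustrate the idea; sanitize_headers is a hypothetical helper name):

```python
def sanitize_headers(headers):
    """Return a copy of the upstream response headers with
    Content-Disposition forced to inline, so browsers render
    the body instead of downloading it."""
    fixed = dict(headers)
    # Replace any 'attachment; filename=...' disposition with inline.
    fixed['Content-Disposition'] = 'inline'
    return fixed

# Headers as returned by the Filepicker response above (abridged).
upstream = {
    'Content-Type': 'image/jpeg',
    'Content-Disposition': 'attachment; filename="photo.jpg"',
}
print(sanitize_headers(upstream)['Content-Disposition'])
```

The proxy would fetch the upstream body unchanged and relay it with the rewritten headers.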
While vinayr's answer is correct, you can avoid using a proxy altogether by appending ?dl=false to the end of your FilePicker URI.
For example: https://www.filepicker.io/api/file/9H-1AxgZTwqct8tjkmkZ?dl=false
There are also a number of other options covered in the FilePicker documentation, particularly the "Working with FPUrls" section and the "Retrieving the file" and "Image Conversion" subsections.
Github uses https://github.com/atmos/camo to proxy images for SSL. You can try using it. You can mount it on your express app:
var camo = require('./node_modules/camo/server.js') // you have to strip out the server.listen(port) part
app.use('/proxy', camo)